Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The pre-training models proposed for different modalities show a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. Since almost all common modules are provided for each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
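A minimal sketch of the component-composition idea the abstract describes, assuming a simple registry per component; the class and registry names are illustrative and are not TencentPretrain's actual API.

```python
# Illustrative only: compose a pre-training model by picking one module per component.
import torch
import torch.nn as nn

EMBEDDINGS = {"word": lambda d: nn.Embedding(30522, d)}                  # embedding component
ENCODERS = {"transformer": lambda d: nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True), num_layers=2)}
TARGETS = {"mlm": lambda d: nn.Linear(d, 30522)}                         # pre-training target head

class ComposedModel(nn.Module):
    """Build a model from chosen embedding / encoder / target modules."""
    def __init__(self, embedding, encoder, target, hidden=256):
        super().__init__()
        self.embedding = EMBEDDINGS[embedding](hidden)
        self.encoder = ENCODERS[encoder](hidden)
        self.target = TARGETS[target](hidden)

    def forward(self, token_ids):
        return self.target(self.encoder(self.embedding(token_ids)))

model = ComposedModel("word", "transformer", "mlm")
logits = model(torch.randint(0, 30522, (2, 16)))    # (batch, seq_len, vocab)
```

Swapping the registry entries (e.g., an image-patch embedding or a decoder target) would yield models of other modalities under the same composition pattern.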
Federated learning (FL) is an emerging privacy-preserving machine learning (ML) paradigm. An important type of FL is cross-silo FL, which enables a small number of organizations to cooperatively train a shared model by keeping confidential data locally and aggregating weights on a central parameter server. In practice, however, the central server may be vulnerable to malicious attacks or software failures. To address this problem, in this paper we propose DeFL, a novel decentralized weight aggregation framework for cross-silo FL. DeFL eliminates the central server by aggregating weights on every participating node, and only the weights of the current training round are maintained and synchronized among all nodes. We use Multi-Krum to enable aggregation of correct weights from honest nodes and use HotStuff to ensure that the training-round number and the weights remain consistent across all nodes. Furthermore, we theoretically analyze the Byzantine fault tolerance, convergence, and complexity of DeFL. We conduct extensive experiments on two widely adopted public datasets, i.e., CIFAR-10 and Sentiment140, to evaluate the performance of DeFL. The results show that DeFL defends against common threat models with minimal accuracy loss, and achieves up to a 100x reduction in storage overhead and up to a 12x reduction in network overhead compared to state-of-the-art decentralized FL approaches.
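A self-contained sketch of Multi-Krum aggregation, the robust rule the abstract says DeFL relies on; the parameter choices below are illustrative and not the paper's exact configuration.

```python
# Multi-Krum: score each update by its distance to its closest peers, then average
# the m most "central" updates, discarding outliers from suspected Byzantine nodes.
import numpy as np

def multi_krum(updates, f, m):
    """updates: list of flattened model updates; f: assumed Byzantine count; m: updates kept."""
    n = len(updates)
    W = np.stack(updates)
    dists = np.sum((W[:, None, :] - W[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    scores = []
    for i in range(n):
        closest = np.sort(np.delete(dists[i], i))[: n - f - 2]      # n-f-2 nearest neighbors
        scores.append(closest.sum())
    selected = np.argsort(scores)[:m]
    return W[selected].mean(axis=0)

updates = [np.random.randn(10) for _ in range(7)]
aggregated = multi_krum(updates, f=1, m=3)
```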
Several recent studies have pointed out that existing visual question answering (VQA) models severely suffer from the language prior problem, which refers to capturing superficial statistical correlations between question types and answers while ignoring image content. Considerable effort has been devoted to strengthening image dependency by designing delicate models or introducing extra visual annotations. However, these methods cannot sufficiently explore how visual cues explicitly affect the learned answer representations, which is crucial for mitigating language dependency. Moreover, they generally emphasize class-level discrimination of the learned answer representations, neglecting finer-grained instance-level patterns that call for further optimization. In this paper, we propose a novel collaborative learning scheme from the perspective of visual perturbation calibration, which can better investigate fine-grained visual effects and mitigate the language prior problem by learning instance-level characteristics. Specifically, we design a visual controller to construct two kinds of curated images with different perturbation extents, based on which collaborative learning of intrinsic invariance and instance discrimination is achieved by two carefully designed discriminators. In addition, we apply an information bottleneck modulator on the latent space for further bias alleviation and representation calibration. We impose our visual perturbation-aware framework on three orthodox baselines, and the experimental results on two diagnostic VQA-CP benchmark datasets clearly demonstrate its effectiveness. Furthermore, we also validate its robustness on the balanced VQA benchmark.
Visual question answering (VQA) is fundamentally compositional in nature, and many questions can be answered simply by decomposing them into modular sub-problems. The recently proposed Neural Module Network (NMN) employs this strategy for question answering, but relies on off-the-shelf layout parsers or additional expert policies for network architecture design rather than learning from the data. These strategies result in unsatisfactory adaptability to the semantically complicated variations of the inputs, thereby hindering the representational capacity and generalizability of the model. To tackle this problem, we propose a semantic-aware modular capsule routing framework, termed SUPER, to better capture instance-specific vision-semantic characteristics and refine the discriminative representations for prediction. In particular, five powerful specialized modules as well as dynamic routers are tailored in each layer of the SUPER network, and a compact routing space is constructed such that a variety of customizable routes can be sufficiently exploited and the vision-semantic representations can be explicitly calibrated. We comparatively demonstrate the effectiveness and generalization ability of our proposed SUPER scheme on five benchmark datasets, as well as its parameter-efficiency advantage. It is worth emphasizing that this work is not about pursuing state-of-the-art results in VQA. Instead, we hope our model serves to provide a novel perspective on architecture learning and representation calibration for VQA.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or can only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) an intermediate layer of the teacher network serves as a better target than the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification for the ViT-Tiny, ViT-Small, and ViT-Base models, with gains of +4.2%/+2.4%/+1.4%, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, namely by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
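A hedged sketch of what "distilling token relations" can look like: match softmax-normalized token-to-token similarity maps of a teacher layer and the student. This is a simplified stand-in, not the exact TinyMIM loss.

```python
# Token-relation distillation sketch: the relation map is a (seq x seq) similarity
# matrix, so teacher and student can be matched even when their widths differ.
import torch
import torch.nn.functional as F

def relation_map(tokens, temperature=0.1):
    """tokens: (batch, seq, dim) -> log-softmax token-to-token similarities."""
    tokens = F.normalize(tokens, dim=-1)
    sim = tokens @ tokens.transpose(1, 2) / temperature       # (batch, seq, seq)
    return F.log_softmax(sim, dim=-1)

def relation_distill_loss(student_tokens, teacher_tokens):
    s = relation_map(student_tokens)
    with torch.no_grad():
        t = relation_map(teacher_tokens).exp()                # teacher probabilities
    return F.kl_div(s, t, reduction="batchmean")

# Student width 192 vs. teacher width 768; only the number of tokens must agree.
loss = relation_distill_loss(torch.randn(2, 196, 192), torch.randn(2, 196, 768))
```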
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be available.
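An illustrative sketch of the first enhancement step described above, assuming a simple formulation: pool support features under the support mask into a class center, then re-weight query features by their similarity to that center. The function names and the cosine-similarity weighting are hypothetical simplifications, not the paper's exact module.

```python
# Mask-pooled class center + feature re-weighting (illustrative sketch).
import torch
import torch.nn.functional as F

def mask_pooled_center(support_feat, support_mask):
    """support_feat: (C, H, W); support_mask: (H, W) binary -> class center of shape (C,)."""
    mask = support_mask.float()
    return (support_feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)

def reweight_query(query_feat, center):
    """Scale each query location by its (non-negative) similarity to the class center."""
    sim = F.cosine_similarity(query_feat, center[:, None, None], dim=0)   # (H, W)
    return query_feat * sim.clamp(min=0).unsqueeze(0)

center = mask_pooled_center(torch.randn(256, 32, 32), torch.randint(0, 2, (32, 32)))
enhanced = reweight_query(torch.randn(256, 32, 32), center)
```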
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and the need for fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality, etc. Our 900M-parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B-parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
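A minimal sketch of the masked image-token objective described above, with a toy transformer standing in for Muse; the tokenizer vocabulary, masking ratio, and conditioning-by-prefix scheme are assumptions made for illustration.

```python
# Masked image-token modeling sketch: replace a random subset of discrete image tokens
# with [MASK], condition on text embeddings, and apply cross-entropy on masked positions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MASK_ID, DIM = 8192, 8192, 512
token_emb = nn.Embedding(VOCAB + 1, DIM)                      # +1 slot for the [MASK] token
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=DIM, nhead=8, batch_first=True), num_layers=2)
head = nn.Linear(DIM, VOCAB)

def masked_token_loss(image_tokens, text_embed, mask_ratio=0.6):
    """image_tokens: (B, N) discrete VQ codes; text_embed: (B, T, DIM) from a frozen LLM."""
    mask = torch.rand(image_tokens.shape) < mask_ratio
    inputs = image_tokens.masked_fill(mask, MASK_ID)
    x = torch.cat([text_embed, token_emb(inputs)], dim=1)      # condition by prefixing text
    logits = head(backbone(x))[:, text_embed.size(1):]         # keep image positions only
    return F.cross_entropy(logits[mask], image_tokens[mask])   # loss on masked positions

loss = masked_token_loss(torch.randint(0, VOCAB, (2, 256)), torch.randn(2, 8, DIM))
```

At inference, parallel decoding fills in many masked tokens per step rather than one token at a time, which is the source of the efficiency gain the abstract mentions.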
Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling the distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDEs) and derive discrete graph structures as the condition for the reverse generative process. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. In particular, the proposed method still generates high-quality molecular graphs within a limited number of steps.
Deep neural networks are vulnerable to adversarial attacks. In this paper, we take the role of investigators who want to trace the attack and identify the source, that is, the particular model from which the adversarial examples were generated. Techniques derived from this would aid forensic investigation of attack incidents and serve as deterrence to potential attacks. We consider the buyer-seller setting where a machine learning model is distributed to various buyers and each buyer receives a slightly different copy with the same functionality. A malicious buyer generates adversarial examples from a particular copy $\mathcal{M}_i$ and uses them to attack other copies. From these adversarial examples, the investigator wants to identify the source $\mathcal{M}_i$. To address this problem, we propose a two-stage separate-and-trace framework. The model separation stage generates multiple copies of a model for the same classification task. This process injects unique characteristics into each copy so that the adversarial examples generated from it have distinct and traceable features. We give a parallel structure which embeds a "tracer" in each copy, and a noise-sensitive training loss to achieve this goal. The tracing stage takes in adversarial examples and a few candidate models, and identifies the likely source. Based on the unique features induced by the noise-sensitive loss function, we can effectively trace the potential adversarial copy by considering the output logits from each tracer. Empirical results show that it is possible to trace the origin of the adversarial example and that the mechanism can be applied to a wide range of architectures and datasets.
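An illustrative sketch of the tracing stage: score each candidate copy by how strongly its tracer output reacts to the adversarial batch and return the highest-scoring one. The specific scoring heuristic below is an assumption, not the paper's exact criterion.

```python
# Tracing sketch: rank candidate copies by their tracer's response to adversarial inputs.
import torch

def trace_source(adv_examples, candidate_models):
    """Return the index of the copy whose tracer logits react most strongly."""
    scores = []
    with torch.no_grad():
        for model in candidate_models:
            logits = model(adv_examples)                       # tracer/classifier outputs
            # Assumed heuristic: examples crafted on a copy push its own logits to extremes.
            scores.append(logits.abs().max(dim=1).values.mean().item())
    return max(range(len(scores)), key=scores.__getitem__)

copies = [torch.nn.Linear(32, 10) for _ in range(3)]           # stand-ins for model copies
source_index = trace_source(torch.randn(8, 32), copies)
```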
This paper presents a novel framework for planning in unknown and occluded urban spaces. We specifically focus on turns and intersections, where occlusions significantly impact navigability. Our approach uses an inpainting model to fill in a sparse, occluded, semantic lidar point cloud and plans dynamically feasible paths for a vehicle to traverse through the open and inpainted spaces. We demonstrate our approach using a car's lidar data with real-time occlusions, and show that by inpainting occluded areas we can plan longer paths with more turn options than without inpainting; in addition, our approach more closely follows paths derived from a planner with no occlusions (called the ground truth) compared to other state-of-the-art approaches.