Unsupervised generation of virtual humans with diverse appearances and animatable poses is important for creating 3D human avatars and for other AR/VR applications. Existing methods are either limited to rigid object modeling or are not generative, and hence cannot synthesize high-quality virtual humans and animate them. In this work we propose AvatarGen, the first method that can generate non-rigid humans with diverse appearance as well as full control over poses and viewpoints, while requiring only 2D images for training. Specifically, it extends recent 3D GANs to clothed humans by leveraging a coarse human body model as a proxy that warps the observation space into a standard avatar in a canonical space. To model non-rigid dynamics, it introduces a deformation network that learns pose-dependent deformations in the canonical space. To improve the geometric quality of the generated human avatars, it adopts a signed distance field as the geometric representation, which allows more direct regularization of the geometry learning by the body model. Benefiting from these designs, our method can generate animatable human avatars with high-quality appearance and geometry modeling, greatly outperforming previous 3D GANs. Moreover, it is competent for many applications, such as single-view reconstruction, reanimation, and text-guided synthesis. Code and pre-trained models will be made available.
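A minimal sketch of the canonical-space modeling described above, not the authors' code: a pose-conditioned deformation network predicts per-point offsets in canonical space, and an SDF head provides the geometric representation. The network widths and the SMPL-style 72-dimensional pose vector are assumptions.

```python
import torch
import torch.nn as nn

class DeformField(nn.Module):
    """Pose-dependent deformation of canonical-space points (assumed form)."""
    def __init__(self, pose_dim=72, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),  # per-point offset in canonical space
        )

    def forward(self, x_canonical, pose):
        pose = pose.unsqueeze(1).expand(-1, x_canonical.shape[1], -1)
        return self.net(torch.cat([x_canonical, pose], dim=-1))

class SDFHead(nn.Module):
    """Signed-distance prediction for a canonical-space point."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),  # signed distance
        )

    def forward(self, x):
        return self.net(x)

# Usage: warp observed points to canonical space with the coarse body model
# (omitted here), apply the pose-dependent deformation, then query the SDF.
deform, sdf = DeformField(), SDFHead()
pts = torch.randn(2, 1024, 3)   # canonical-space samples
pose = torch.randn(2, 72)       # e.g., SMPL-style pose parameters
d = sdf(pts + deform(pts, pose))
```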
Generative open-domain dialogue systems can benefit from external knowledge, but the lack of external knowledge resources and the difficulty of finding relevant knowledge limit the development of this technology. To this end, we propose a knowledge-driven dialogue task with dynamic service information. Specifically, we use a large number of service APIs, which offer high coverage and spatiotemporal sensitivity, as the external knowledge source. The dialogue system generates queries to request the external services along with user information, acquires the relevant knowledge, and generates responses based on that knowledge. To carry out this method, we collect and release DuSinc, the first open-domain Chinese service-knowledge dialogue dataset. We also build a baseline model on top of PLATO that automatically exploits service information in dialogue. Both automatic and human evaluations show that the proposed method significantly improves open-domain dialogue: compared with the dialogue pre-training model PLATO-2, the session-level overall score in human evaluation improves by 59.29%. The dataset and the baseline model will be open-sourced.
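A minimal sketch of the query-then-respond loop described above. The function names and interfaces are placeholders for illustration, not the paper's actual API.

```python
from typing import Callable

def knowledge_grounded_reply(
    dialogue_history: list[str],
    user_info: dict,
    gen_query: Callable[[list[str], dict], str],    # query generator
    call_service: Callable[[str], str],             # external service API
    gen_response: Callable[[list[str], str], str],  # knowledge-grounded decoder
) -> str:
    # 1. Generate a query from the dialogue context and user information.
    query = gen_query(dialogue_history, user_info)
    # 2. Request the external service to obtain time/location-sensitive knowledge.
    knowledge = call_service(query)
    # 3. Generate the response conditioned on the retrieved knowledge.
    return gen_response(dialogue_history, knowledge)
```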
Unsupervised person re-identification (U-ReID) based on modern clustering algorithms has recently reached performance competitive with fully supervised ReID methods. However, such clustering-based schemes become computationally prohibitive on large-scale datasets, and how to efficiently exploit unlimited unlabeled data with limited computing resources for better U-ReID remains under-explored. In this paper, we make the first attempt at large-scale U-ReID and propose a "small data for big task" paradigm dubbed Meta Clustering Learning (MCL). MCL pseudo-labels only a subset of the entire unlabeled data via clustering, saving computation in the first phase of training. Afterwards, the learned cluster centers, called meta-prototypes in our MCL, are regarded as proxy annotators to cheaply annotate the remaining unlabeled data for further polishing the model. To alleviate the potential noisy-label problem in the polishing phase, we enforce two well-designed loss constraints that promise intra-identity consistency and inter-identity strong correlation. On multiple widely used U-ReID benchmarks, our method significantly reduces the computational cost while achieving comparable or better performance than prior works.
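A minimal sketch of our reading of the two-phase idea, not the official MCL code: cluster only a subset of the features, then treat the cluster centers as meta-prototypes that pseudo-label the remaining data by nearest-prototype assignment. The k-means routine and the subset split are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def simple_kmeans(x, k, iters=20):
    """Toy k-means: returns k cluster centers for feature matrix x."""
    centers = x[torch.randperm(len(x))[:k]]
    for _ in range(iters):
        assign = torch.cdist(x, centers).argmin(dim=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = x[assign == j].mean(dim=0)
    return centers

feats = F.normalize(torch.randn(10000, 128), dim=1)  # all unlabeled features
subset = feats[:1000]                                # small subset for phase 1
prototypes = simple_kmeans(subset, k=50)             # meta-prototypes

# Phase 2: prototypes act as proxy annotators for the remaining data.
rest = feats[1000:]
pseudo_labels = (rest @ F.normalize(prototypes, dim=1).t()).argmax(dim=1)
```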
Despite the recent visually-pleasing results achieved, the massive computational cost has been a long-standing flaw for diffusion probabilistic models (DPMs), which, in turn, greatly limits their applications on resource-limited platforms. Prior methods towards efficient DPM, however, have largely focused on accelerating the testing yet overlooked their huge complexity and sizes. In this paper, we make a dedicated attempt to lighten DPM while striving to preserve its favourable performance. We start by training a small-sized latent diffusion model (LDM) from scratch, but observe a significant fidelity drop in the synthetic images. Through a thorough assessment, we find that DPM is intrinsically biased against high-frequency generation, and learns to recover different frequency components at different time-steps. These properties make compact networks unable to represent frequency dynamics with accurate high-frequency estimation. Towards this end, we introduce a customized design for slim DPM, which we term Spectral Diffusion (SD), for light-weight image synthesis. SD incorporates wavelet gating in its architecture to enable frequency-dynamic feature extraction at every reverse step, and conducts spectrum-aware distillation to promote high-frequency recovery by inversely weighting the objective based on the spectrum magnitudes. Experimental results demonstrate that SD achieves an 8-18x reduction in computational complexity compared to latent diffusion models on a series of conditional and unconditional image generation tasks, while retaining competitive image fidelity.
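A minimal sketch of a spectrum-aware distillation loss in the spirit described above; the exact form of the objective is an assumption, not the authors' implementation. Errors are weighted inversely to the teacher's spectrum magnitude so that weak high-frequency components are not drowned out by the dominant low frequencies.

```python
import torch

def spectrum_aware_distill(student_img, teacher_img, eps=1e-8):
    """Assumed distillation objective with inverse spectrum-magnitude weighting."""
    s_freq = torch.fft.fft2(student_img)
    t_freq = torch.fft.fft2(teacher_img)
    weight = 1.0 / (t_freq.abs() + eps)   # up-weight weak (high-frequency) bands
    weight = weight / weight.mean()       # keep the loss scale stable
    return (weight * (s_freq - t_freq).abs() ** 2).mean()

loss = spectrum_aware_distill(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
```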
MetaFormer, the abstracted architecture of Transformer, has been found to play a significant role in achieving competitive performance. In this paper, we further explore the capacity of MetaFormer, again, without focusing on token mixer design: we introduce several baseline models under MetaFormer using the most basic or common mixers, and summarize our observations as follows: (1) MetaFormer ensures a solid lower bound of performance. By merely adopting identity mapping as the token mixer, the MetaFormer model, termed IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works well with arbitrary token mixers. When specifying the token mixer as even a random matrix to mix tokens, the resulting model RandFormer yields an accuracy of >81%, outperforming IdentityFormer. Rest assured of MetaFormer's results when new token mixers are adopted. (3) MetaFormer effortlessly offers state-of-the-art results. With just conventional token mixers dating back five years, the models instantiated from MetaFormer already beat the state of the art. (a) ConvFormer outperforms ConvNeXt. Taking the common depthwise separable convolutions as the token mixer, the model termed ConvFormer, which can be regarded as pure CNNs, outperforms the strong CNN model ConvNeXt. (b) CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable convolutions as the token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model CAFormer sets a new record on ImageNet-1K: it achieves an accuracy of 85.5% at 224x224 resolution, under normal supervised training without external data or distillation. In our expedition to probe MetaFormer, we also find that a new activation, StarReLU, reduces activation FLOPs by 71% compared with GELU yet achieves better performance. We expect StarReLU to find great potential in MetaFormer-like models alongside other neural networks.
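A minimal sketch of a MetaFormer block with a pluggable token mixer and StarReLU (s · ReLU(x)² + b, with learnable scalars), following the abstraction described above; details such as initialization values and MLP expansion ratio are assumptions.

```python
import torch
import torch.nn as nn

class StarReLU(nn.Module):
    """StarReLU: s * relu(x)**2 + b with learnable scalar scale and bias."""
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return self.scale * torch.relu(x) ** 2 + self.bias

class MetaFormerBlock(nn.Module):
    """MetaFormer abstraction: norm -> token mixer -> norm -> channel MLP."""
    def __init__(self, dim, token_mixer):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.token_mixer = token_mixer  # e.g., identity or random mixing
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), StarReLU(), nn.Linear(4 * dim, dim))

    def forward(self, x):  # x: (batch, tokens, dim)
        x = x + self.token_mixer(self.norm1(x))
        return x + self.mlp(self.norm2(x))

# IdentityFormer-style block: identity mapping as the token mixer.
blk = MetaFormerBlock(64, nn.Identity())
out = blk(torch.randn(2, 196, 64))
```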
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning), which is not efficient, or only tune the last linear layer (linear probing), which suffers a significant accuracy drop compared to the full fine-tuning. In this paper, we propose a new parameter-efficient fine-tuning method termed SSF, representing that researchers only need to Scale and Shift the deep Features extracted by a pre-trained model to catch up with the performance of full fine-tuning. In this way, SSF also surprisingly outperforms other parameter-efficient fine-tuning approaches even with a smaller number of tunable parameters. Furthermore, different from some existing parameter-efficient fine-tuning methods (e.g., Adapter or VPT) that introduce extra parameters and computational cost in the training and inference stages, SSF only adds learnable parameters during the training stage, and these additional parameters can be merged into the original pre-trained model weights via re-parameterization in the inference phase. With the proposed SSF, our model obtains 2.46% (90.72% vs. 88.54%) and 11.48% (73.10% vs. 65.57%) performance improvement on FGVC and VTAB-1k in terms of Top-1 accuracy compared to the full fine-tuning, while fine-tuning only about 0.3M parameters. We also conduct extensive experiments on various model families (CNNs, Transformers, and MLPs) and datasets. Results on 26 image classification datasets in total and 3 robustness & out-of-distribution datasets show the effectiveness of SSF. Code is available at https://github.com/dongzelian/SSF.
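A minimal sketch of the scale-and-shift idea and its re-parameterization merge at inference time; where SSF layers sit in the backbone is an assumption here, but the merge identity follows directly from the linear algebra.

```python
import torch
import torch.nn as nn

class SSF(nn.Module):
    """Per-channel scale (gamma) and shift (beta) of deep features."""
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))   # scale
        self.beta = nn.Parameter(torch.zeros(dim))   # shift

    def forward(self, x):  # x: (..., dim)
        return x * self.gamma + self.beta

def merge_into_linear(linear: nn.Linear, ssf: SSF) -> nn.Linear:
    # Re-parameterization: ssf(linear(x)) == merged(x), so no extra
    # parameters or compute remain at inference time.
    merged = nn.Linear(linear.in_features, linear.out_features)
    with torch.no_grad():
        merged.weight.copy_(linear.weight * ssf.gamma.unsqueeze(1))
        merged.bias.copy_(linear.bias * ssf.gamma + ssf.beta)
    return merged

lin, ssf = nn.Linear(16, 32), SSF(32)
x = torch.randn(4, 16)
assert torch.allclose(ssf(lin(x)), merge_into_linear(lin, ssf)(x), atol=1e-6)
```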
In multi-person 2D pose estimation, bottom-up methods predict the poses of all persons simultaneously and, unlike top-down methods, do not rely on human detection. However, the accuracy of SOTA bottom-up methods is still inferior to that of existing top-down methods. This is because the predicted poses are regressed from inconsistent human bounding-box centers and lack human-scale normalization, which makes the predictions inaccurate and causes small-scale persons to be missed. To push the envelope of bottom-up pose estimation, we first propose multi-scale training to strengthen the network against scale variation under single-scale testing, particularly for small-scale persons. Second, we introduce dual anatomical centers (i.e., head and body), from which human poses can be predicted more accurately and reliably, especially for small-scale persons. Moreover, existing bottom-up methods use multi-scale testing to boost the accuracy of pose estimation at the price of multiple additional forward passes, which weakens efficiency, the core strength of bottom-up methods over top-down ones. In contrast, our multi-scale training enables the model to predict high-quality poses in a single forward pass (i.e., single-scale testing). Our method improves bounding-box precision by 38.4% and bounding-box recall by 39.1% over the state of the art (SOTA) on the challenging small-scale persons of COCO. For the human pose AP evaluation, we achieve a new SOTA (71.0 AP) on the COCO test-dev set with single-scale testing. We also achieve the top performance (40.3 AP) on the OCHuman dataset in cross-dataset evaluation.
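A minimal sketch of one way dual-center predictions could be fused, an assumed decoding scheme for illustration rather than the authors' exact procedure: each anatomical center regresses a full pose candidate, and the two candidates are combined using the centers' confidence scores.

```python
import torch

def fuse_dual_center_poses(pose_from_head, pose_from_body, conf_head, conf_body):
    """Confidence-weighted fusion of head-center and body-center pose candidates.

    poses: (num_person, num_joints, 2); confidences: (num_person,)
    """
    w = torch.stack([conf_head, conf_body], dim=0)  # (2, num_person)
    w = w / w.sum(dim=0, keepdim=True)              # normalize per person
    return (w[0, :, None, None] * pose_from_head
            + w[1, :, None, None] * pose_from_body)

fused = fuse_dual_center_poses(
    torch.randn(3, 17, 2), torch.randn(3, 17, 2),  # 3 persons, 17 COCO joints
    torch.rand(3), torch.rand(3),
)
```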
In this paper, we explore a new knowledge-amalgamation problem termed Federated Selective Aggregation (FedSA). The goal of FedSA is to train a student model for a new task with the help of several decentralized teachers whose pre-training tasks and data are different and agnostic. Our motivation for investigating such a problem setup stems from a recent dilemma of model sharing: many researchers and institutes have spent enormous resources on training large and competent networks, yet, due to privacy, security, or intellectual-property concerns, they cannot share their pre-trained models even if they wish to contribute to the community. The proposed FedSA offers a solution to this dilemma and takes it one step further, since the learned student may specialize in a new task different from those of all the teachers. To this end, we propose a dedicated strategy for handling FedSA. Specifically, the student-training process is driven by a novel saliency-based approach that adaptively selects teachers as participants and integrates their representative capabilities into the student. To evaluate the effectiveness of FedSA, we conduct experiments in both single-task and multi-task settings. Experimental results demonstrate that FedSA effectively amalgamates knowledge from decentralized models and attains performance competitive with centralized baselines.
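A minimal sketch of saliency-weighted amalgamation, an assumed form of the saliency-based selection mentioned above: each teacher's feature-matching loss is weighted by how salient that teacher's representation is for the input batch. The feature-norm saliency proxy is a crude stand-in for illustration.

```python
import torch
import torch.nn.functional as F

def amalgamation_loss(student_feat, teacher_feats):
    """Weight per-teacher distillation losses by a saliency score.

    student_feat: (batch, dim); teacher_feats: list of (batch, dim)
    """
    losses, saliencies = [], []
    for t in teacher_feats:
        losses.append(F.mse_loss(student_feat, t, reduction="none").mean(dim=1))
        saliencies.append(t.norm(dim=1))  # crude saliency proxy (assumption)
    losses = torch.stack(losses)                          # (num_teachers, batch)
    weights = torch.softmax(torch.stack(saliencies), dim=0)
    return (weights * losses).sum(dim=0).mean()

loss = amalgamation_loss(torch.randn(8, 64), [torch.randn(8, 64) for _ in range(3)])
```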
Anomaly detection aims to identify samples that deviate from the normal data distribution. Contrastive learning offers a successful way of learning sample representations that discriminate anomalies effectively. However, when the training data are contaminated with unlabeled abnormal samples under a semi-supervised setting, current contrastive-based methods typically 1) ignore the comprehensive relations among the training data, leading to suboptimal performance, and 2) require fine-tuning, resulting in low efficiency. To address these two issues, we propose a novel hierarchical semi-supervised contrastive learning (HSCL) framework for contamination-resistant anomaly detection. Specifically, HSCL hierarchically regulates three complementary relations: sample-to-sample, sample-to-prototype, and normal-to-abnormal relations, enlarging the discrimination between normal and abnormal samples through a comprehensive exploration of the contaminated data. Moreover, HSCL is an end-to-end approach that learns discriminative representations efficiently without fine-tuning. HSCL achieves state-of-the-art performance in multiple scenarios, such as one-class classification and cross-dataset detection. Extensive ablation studies further verify the effectiveness of each considered relation. The code is available at https://github.com/gaoangw/hscl.
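A minimal sketch of how the three relations could be combined, with assumed loss forms rather than the paper's exact objectives: normal samples agree with one another and with a normal prototype, while suspected anomalies are pushed at least a margin away from it.

```python
import torch
import torch.nn.functional as F

def hscl_loss(feats, prototype, is_anomaly, margin=0.5):
    """Toy combination of sample-to-sample, sample-to-prototype,
    and normal-to-abnormal terms (assumed forms)."""
    feats = F.normalize(feats, dim=1)
    proto = F.normalize(prototype, dim=0)
    normal, abnormal = feats[~is_anomaly], feats[is_anomaly]

    # Sample-to-sample: pull normal samples toward one another.
    s2s = (1 - normal @ normal.t()).mean()
    # Sample-to-prototype: pull normal samples toward the normal prototype.
    s2p = (1 - normal @ proto).mean()
    # Normal-to-abnormal: push anomalies at least `margin` away from it.
    n2a = (F.relu(abnormal @ proto - margin).mean()
           if len(abnormal) else feats.new_zeros(()))
    return s2s + s2p + n2a

loss = hscl_loss(torch.randn(16, 64), torch.randn(64), torch.rand(16) > 0.8)
```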
State-of-the-art parametric and non-parametric style transfer methods are prone either to distorted local style patterns caused by aligning global statistics, or to unpleasant artifacts caused by patch mismatching. In this paper, we study a novel semi-parametric neural style transfer framework that alleviates the deficiencies of both parametric and non-parametric stylization. The core idea of our approach is to establish accurate and fine-grained content-style correspondences with graph neural networks (GNNs). To this end, we develop an elaborate GNN model that takes local patches of content and style as graph vertices. The style transfer process is then modeled as attention-based heterogeneous message passing between the style and content nodes in a learnable manner, yielding adaptive many-to-one style-content correlations at the local patch level. In addition, an elaborate deformable graph convolution operation is introduced for cross-scale style-content matching. Experimental results show that the proposed semi-parametric image stylization approach produces encouraging results on challenging style patterns, preserving both the global appearance and fine details. Moreover, by controlling the number of edges at the inference stage, the proposed method also enables new functionalities such as diversified patch-based stylization with a single model.
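A minimal sketch of the attention-based message passing described above, under assumed shapes and projections: content and style patches are graph nodes, and each content node aggregates style-node messages with learned attention, giving many-to-one style-to-content correspondences.

```python
import torch
import torch.nn as nn

class StyleContentMessagePassing(nn.Module):
    """Attention-based messages from style nodes to content nodes (assumed form)."""
    def __init__(self, dim):
        super().__init__()
        self.dim = dim
        self.q = nn.Linear(dim, dim)  # queries from content nodes
        self.k = nn.Linear(dim, dim)  # keys from style nodes
        self.v = nn.Linear(dim, dim)  # messages carried by style nodes

    def forward(self, content_nodes, style_nodes):
        # content_nodes: (Nc, dim); style_nodes: (Ns, dim)
        attn = self.q(content_nodes) @ self.k(style_nodes).t() / self.dim ** 0.5
        attn = attn.softmax(dim=-1)   # edge weights per content node
        return content_nodes + attn @ self.v(style_nodes)

mp = StyleContentMessagePassing(128)
stylized = mp(torch.randn(64, 128), torch.randn(100, 128))
```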