As natural language processing (NLP) research on gender bias becomes a significant interdisciplinary topic, prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose CORGI-PM, a Chinese cOrpus foR Gender bIas Probing and Mitigation, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
Vision transformers (ViTs) have achieved impressive results on various computer vision tasks in the last several years. In this work, we study the capability of frozen ViTs, pretrained only on visual data, to generalize to audio-visual data without finetuning any of their original parameters. To do so, we propose a latent audio-visual hybrid (LAVISH) adapter that adapts pretrained ViTs to audio-visual tasks by injecting a small number of trainable parameters into every layer of a frozen ViT. To efficiently fuse visual and audio cues, our LAVISH adapter uses a small set of latent tokens, which form an attention bottleneck, thus eliminating the quadratic cost of standard cross-attention. Compared to existing modality-specific audio-visual methods, our approach achieves competitive or even better performance on various audio-visual tasks while using fewer tunable parameters and without relying on costly audio pretraining or external audio encoders. Our code is available at https://genjib.github.io/project_page/LAVISH/
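To make the bottleneck mechanism concrete, here is a minimal PyTorch sketch of latent-token fusion: a handful of learnable latent tokens attend to the concatenated audio and visual tokens, and each modality then reads the fused summary back, so the cost grows with the number of latents rather than with the product of the two token counts. This is only an illustration of how such an adapter could look, not the LAVISH implementation; all module names and dimensions are assumptions.

```python
import torch
import torch.nn as nn


class LatentBottleneckFusion(nn.Module):
    def __init__(self, dim=768, num_latents=8, num_heads=8):
        super().__init__()
        # Learnable latent tokens shared across the batch.
        self.latents = nn.Parameter(torch.randn(1, num_latents, dim) * 0.02)
        # Latents gather information from both modalities ...
        self.collect = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # ... and each modality reads the fused summary back from the latents.
        self.distribute = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, visual_tokens, audio_tokens):
        # visual_tokens: (B, Nv, dim), audio_tokens: (B, Na, dim)
        b = visual_tokens.size(0)
        latents = self.latents.expand(b, -1, -1)
        context = torch.cat([visual_tokens, audio_tokens], dim=1)
        # Cost is O(L * (Nv + Na)) instead of O(Nv * Na) for direct cross-attention.
        latents, _ = self.collect(latents, context, context)
        fused_v, _ = self.distribute(visual_tokens, latents, latents)
        fused_a, _ = self.distribute(audio_tokens, latents, latents)
        # Residual connections, so frozen ViT features are only lightly perturbed.
        return visual_tokens + fused_v, audio_tokens + fused_a


if __name__ == "__main__":
    fusion = LatentBottleneckFusion()
    v, a = torch.randn(2, 196, 768), torch.randn(2, 64, 768)
    v_out, a_out = fusion(v, a)
    print(v_out.shape, a_out.shape)
```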
Real-world image super-resolution (RISR) has received increasing attention for improving the quality of SR images under unknown, complex degradations. Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR. Specifically, we first introduce a light degradation prediction network that regresses the degradation vector to simulate real-world degradations, upon which the channel splitting vector is generated as the input to an efficient SR model. Then, a learnable octave convolution block is proposed to adaptively decide the channel splitting scale for low- and high-frequency features at each block, reducing computation overhead and memory cost by assigning the large split to low-frequency features and the small split to high-frequency ones. To further improve RISR performance, non-local regularization is employed to supplement the knowledge of patches from the LR and HR subspaces without additional computation at inference. Extensive experiments demonstrate the effectiveness of DCS-RISR on different benchmark datasets. DCS-RISR not only achieves the best trade-off between computation/parameters and PSNR/SSIM metrics, but also effectively handles real-world images with different degradation levels.
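The channel-splitting idea can be illustrated with a short PyTorch sketch: a split ratio (fixed here; in DCS-RISR it would be decided per block from the predicted channel splitting vector) routes the larger share of channels through a cheap half-resolution low-frequency branch and the rest through a full-resolution high-frequency branch. This is a rough sketch under assumed names and hyper-parameters, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SplitOctaveBlock(nn.Module):
    def __init__(self, channels=64, low_ratio=0.75):
        super().__init__()
        self.c_low = int(channels * low_ratio)   # large split -> low frequency
        self.c_high = channels - self.c_low      # small split -> high frequency
        self.low_conv = nn.Conv2d(self.c_low, self.c_low, 3, padding=1)
        self.high_conv = nn.Conv2d(self.c_high, self.c_high, 3, padding=1)

    def forward(self, x):
        # Split channels according to the (predicted) ratio.
        x_low, x_high = torch.split(x, [self.c_low, self.c_high], dim=1)
        # Low-frequency branch: process at half resolution, then upsample back.
        h, w = x_low.shape[-2:]
        x_low = F.avg_pool2d(x_low, 2)
        x_low = F.interpolate(self.low_conv(x_low), size=(h, w), mode="nearest")
        # High-frequency branch: keep full resolution.
        x_high = self.high_conv(x_high)
        return torch.cat([x_low, x_high], dim=1) + x  # residual


if __name__ == "__main__":
    block = SplitOctaveBlock()
    feat = torch.randn(1, 64, 48, 48)
    print(block(feat).shape)
```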
This is a brief technical report of our proposed method for the Multiple-Object Tracking (MOT) Challenge in Complex Environments. In this paper, we treat the MOT task as a two-stage task consisting of human detection and trajectory matching. Specifically, we design an improved human detector and associate most of the detections to guarantee the integrity of the motion trajectories. We also propose a location-wise matching matrix to obtain more accurate trajectory matching. Without any model merging, our method achieves 66.672 HOTA and 93.971 MOTA on the DanceTrack challenge dataset.
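As a rough illustration of what a location-wise matching step could look like (an assumption for clarity, not the report's actual implementation), the sketch below modulates an IoU-based cost between tracks and detections with a centre-distance term and solves the assignment with the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(box_a, box_b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def match(track_boxes, det_boxes, max_center_dist=200.0):
    """Return (track_idx, det_idx) pairs from a location-weighted cost matrix."""
    cost = np.zeros((len(track_boxes), len(det_boxes)))
    for i, tb in enumerate(track_boxes):
        for j, db in enumerate(det_boxes):
            tc = np.array([(tb[0] + tb[2]) / 2, (tb[1] + tb[3]) / 2])
            dc = np.array([(db[0] + db[2]) / 2, (db[1] + db[3]) / 2])
            # Location-wise weight: penalise pairs whose centres are far apart.
            loc_weight = min(1.0, np.linalg.norm(tc - dc) / max_center_dist)
            cost[i, j] = (1.0 - iou(tb, db)) + loc_weight
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))


if __name__ == "__main__":
    tracks = [[0, 0, 10, 10], [50, 50, 60, 60]]
    dets = [[52, 51, 62, 61], [1, 1, 11, 11]]
    print(match(tracks, dets))  # [(0, 1), (1, 0)]
```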
The deep learning community has witnessed an exponentially growing interest in self-supervised learning (SSL). However, it remains unexplored how to build a framework for learning useful representations of raw music waveforms in a self-supervised manner. In this work, we design Music2Vec, a framework exploring different SSL algorithmic components and tricks for music audio recordings. Our model achieves results comparable to the state-of-the-art (SOTA) music SSL model Jukebox, despite being significantly smaller, with less than 2% of the latter's parameters. The model will be released on Huggingface (https://huggingface.co/m-a-p/music2vec-v1).
Due to its importance in facial behaviour analysis, facial action unit (AU) detection has attracted increasing attention from the research community. Leveraging the online knowledge distillation framework, we propose the "FANTrans" method for AU detection. Our model consists of a hybrid network of convolution and transformer blocks to learn per-AU features and to model AU co-occurrences. The model uses a pre-trained face alignment network as the feature extractor. After further transformation by a small learnable add-on convolutional subnet, the per-AU features are fed into transformer blocks to enhance their representation. As multiple AUs often appear together, we propose a learnable attention drop mechanism in the transformer block to learn the correlation between the features of different AUs. We also design a classifier that predicts AU presence by considering all AUs' features, to explicitly capture label dependencies. Finally, we attempt to adapt online knowledge distillation to this task in the training stage, further improving the model's performance. Experiments on the BP4D and DISFA datasets demonstrate the effectiveness of the proposed method.
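As one possible reading of the joint classifier (an illustrative assumption, not the paper's architecture), the sketch below predicts each AU's presence from the concatenation of all per-AU features, so every prediction head can exploit label dependencies.

```python
import torch
import torch.nn as nn


class JointAUClassifier(nn.Module):
    def __init__(self, num_aus=12, feat_dim=64):
        super().__init__()
        # One prediction head per AU, but each head sees all AUs' features.
        self.heads = nn.ModuleList(
            [nn.Linear(num_aus * feat_dim, 1) for _ in range(num_aus)]
        )

    def forward(self, au_feats):
        # au_feats: (B, num_aus, feat_dim)
        flat = au_feats.flatten(1)                    # (B, num_aus * feat_dim)
        logits = torch.cat([h(flat) for h in self.heads], dim=1)
        return torch.sigmoid(logits)                  # per-AU presence probability


if __name__ == "__main__":
    clf = JointAUClassifier()
    probs = clf(torch.randn(4, 12, 64))
    print(probs.shape)  # torch.Size([4, 12])
```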
Fairness has become a trending topic in natural language processing (NLP), which addresses biases targeting certain social groups such as genders and religions. However, regional bias in language models (LMs), a long-standing global discrimination problem, remains unexplored. This paper bridges the gap by analysing the regional bias learned by the pre-trained language models that are broadly used in NLP tasks. In addition to verifying the existence of regional bias in LMs, we find that the biases on regional groups can be strongly influenced by the geographical clustering of the groups. We accordingly propose a HiErarchical Regional Bias evaluation method (HERB) that utilises the information from the sub-region clusters to quantify the bias in pre-trained LMs. Experiments show that our hierarchical metric can effectively evaluate the regional bias with respect to comprehensive topics and measure the potential regional bias that can be propagated to downstream tasks. Our code is available at https://github.com/Bernard-Yang/HERB.
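A toy sketch of the hierarchical idea (our own simplification, not the released HERB code): per-region bias scores are grouped by sub-region cluster, and the metric combines the spread within clusters with the spread across cluster means. The function name and the equal weighting below are assumptions.

```python
from statistics import mean, pstdev


def hierarchical_bias(scores_by_cluster):
    """scores_by_cluster: dict mapping cluster name -> list of per-region bias scores."""
    within = [pstdev(scores) for scores in scores_by_cluster.values() if len(scores) > 1]
    cluster_means = [mean(scores) for scores in scores_by_cluster.values()]
    across = pstdev(cluster_means) if len(cluster_means) > 1 else 0.0
    # Combine the two levels; equal weighting is an arbitrary choice here.
    return (mean(within) + across) if within else across


if __name__ == "__main__":
    example = {
        "cluster_A": [0.12, 0.15, 0.11],
        "cluster_B": [0.40, 0.38],
    }
    print(round(hierarchical_bias(example), 4))
```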
Pruning neural networks has become popular over the past decade, ever since it was shown that a large number of weights can be safely removed from modern neural networks without compromising accuracy. Numerous pruning methods have since been proposed, each claimed to be better than the previous ones. Many of today's state-of-the-art (SOTA) techniques rely on complex pruning methodologies that use importance scores, feedback obtained through back-propagation, or heuristics-based pruning rules, among others. We question this pattern of introducing complexity in order to obtain better pruning results. We benchmark these SOTA techniques against Global Magnitude Pruning (Global MP), a naive pruning baseline, to evaluate whether complexity is really needed to achieve higher performance. Global MP ranks weights in order of their magnitude and prunes the smallest ones; in its vanilla form it is therefore one of the simplest pruning techniques. Surprisingly, we find that vanilla Global MP outperforms all the other SOTA techniques and achieves new SOTA results. It also achieves good performance in terms of FLOPs sparsification, which we find is enhanced when pruning is performed gradually. We also find that Global MP generalises across tasks, datasets, and models with superior performance. Moreover, a common problem that many pruning algorithms encounter at high sparsity rates can be easily fixed in Global MP by setting a minimum threshold of weights to retain in each layer. Finally, unlike many other SOTA techniques, Global MP does not require any additional algorithm-specific hyper-parameters and is very straightforward to tune and implement. We showcase our findings on various models (WRN-28-8, ResNet-32, ResNet-50, MobileNet-V1 and FastGRNN) and multiple datasets (CIFAR-10, ImageNet and HAR-2). Code is available at https://github.com/manasgupta-1/globalmp.
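Vanilla Global MP is simple enough to sketch in a few lines of PyTorch. The snippet below (a minimal sketch, not the repository's code) ranks all prunable weights by absolute magnitude under a single global threshold and zeroes the smallest ones, while optionally keeping a minimum fraction of weights alive in every layer; parameter names and defaults are illustrative assumptions.

```python
import torch
import torch.nn as nn


def global_magnitude_prune(model, sparsity=0.9, min_keep_per_layer=0.01):
    weights = [m.weight.data for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    # Single global threshold from the magnitudes of all prunable weights.
    all_mags = torch.cat([w.abs().flatten() for w in weights])
    k = int(sparsity * all_mags.numel())
    threshold = torch.kthvalue(all_mags, max(k, 1)).values
    for w in weights:
        mask = w.abs() > threshold
        # Per-layer floor: always keep at least a small fraction of the
        # largest-magnitude weights in each layer.
        n_keep = max(int(min_keep_per_layer * w.numel()), 1)
        if mask.sum() < n_keep:
            top = torch.topk(w.abs().flatten(), n_keep).indices
            mask = torch.zeros_like(w, dtype=torch.bool).flatten()
            mask[top] = True
            mask = mask.view_as(w)
        w.mul_(mask)
    return model


if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))
    global_magnitude_prune(net, sparsity=0.8)
    zeros = sum((m.weight == 0).sum().item() for m in net if isinstance(m, nn.Linear))
    total = sum(m.weight.numel() for m in net if isinstance(m, nn.Linear))
    print(f"achieved sparsity ~ {zeros / total:.2f}")
```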
Prompting, which casts downstream applications as language-modelling tasks, has been shown to be sample-efficient compared to standard fine-tuning of pre-trained models. However, one pitfall of prompting is the need for manually designed patterns, whose outcomes can be unintuitive and which require large validation sets to tune. To tackle this challenge, we propose AutoSeq, a fully automatic prompting method: (1) we adopt natural language prompts on sequence-to-sequence models, enabling free-form generation and a larger label search space; (2) we propose label sequences (phrases of indefinite length that verbalise the labels), which remove the need for manual templates and are more expressive than single label words; (3) we use beam search to automatically generate a large number of label sequence candidates and propose contrastive re-ranking to obtain the best combinations. AutoSeq significantly outperforms other methods free of manual design, such as soft prompt tuning, adapter tuning, and automatic search for single label words; the generated label sequences are even better than curated manual ones on a variety of tasks. Our method reveals the potential of sequence-to-sequence models in few-shot learning and sheds light on a path towards generic and automatic prompting. The source code of this paper is available at https://github.com/thunlp/seq2seq-prompt.
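The contrastive re-ranking step can be illustrated with a toy sketch (assumptions only, not the released code): each beam-searched candidate label sequence is scored by how much more likely the model finds it on examples of its own class than on examples of the other classes, and the best candidate per class is kept. The `loglik` function below stands in for the seq2seq model's log-likelihood.

```python
def rerank(candidates, examples_by_class, loglik):
    """candidates: dict class -> list of candidate label-sequence strings.
    examples_by_class: dict class -> list of few-shot input strings.
    loglik(text, label_seq) -> float, model log-likelihood (assumed given)."""
    best = {}
    for cls, seqs in candidates.items():
        scores = []
        for seq in seqs:
            own = sum(loglik(x, seq) for x in examples_by_class[cls])
            others = sum(loglik(x, seq)
                         for c, xs in examples_by_class.items() if c != cls
                         for x in xs)
            scores.append(own - others)  # contrastive score
        best[cls] = max(zip(scores, seqs), key=lambda t: t[0])[1]
    return best


if __name__ == "__main__":
    def dummy_loglik(text, seq):  # stand-in for a real seq2seq model
        return -abs(len(text) - len(seq)) / 10.0

    cands = {"positive": ["great", "really good"], "negative": ["terrible", "not good"]}
    examples = {"positive": ["i loved it"], "negative": ["awful movie"]}
    print(rerank(cands, examples, dummy_loglik))
```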
The growing use of probe vehicles generates a huge amount of GNSS data. Limited by satellite positioning technology, further improving the accuracy of map matching is challenging, especially for low-frequency trajectories. When matching a trajectory, the spatio-temporal information of the ego vehicle's current trip is the most useful while requiring the least data. In addition, a large amount of other data is available, such as the states of other vehicles and past prediction results, but it is hard to extract information from them that is useful for map matching and path inference. Most map-matching studies only use the ego vehicle's data and ignore the data of other vehicles. Motivated by this, this paper designs a new map-matching method that makes full use of "big data". We first group all data into four classes according to their spatial and temporal distance from the probe point being matched, which allows us to rank them by usefulness. Then we design three different methods to extract valuable information (scores) from them: a score for speed and bearing, a score for historical usage, and a score for traffic state obtained with a spectral graph Markov neural network. Finally, we use a modified top-K shortest-path method to search for candidate paths within an elliptical region and then use the fused scores to infer the path (projected locations). We test the proposed method against baseline algorithms using a real-world dataset from China. The results show that all scoring methods enhance map-matching accuracy, and our method outperforms the others, especially when the GNSS probing frequency is less than 0.01 Hz.
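As a highly simplified illustration of the final fusion step (an assumption, not the paper's implementation), the sketch below takes the candidate paths returned by the modified top-K shortest-path search, combines their speed/bearing, historical-usage, and traffic-state scores with fixed weights, and picks the best-scoring path.

```python
def infer_path(candidates, weights=(0.4, 0.3, 0.3)):
    """candidates: list of dicts with keys
    'path', 'speed_bearing', 'history', 'traffic' (scores in [0, 1])."""
    w_sb, w_hist, w_traffic = weights

    def fused(c):
        return (w_sb * c["speed_bearing"]
                + w_hist * c["history"]
                + w_traffic * c["traffic"])

    # The candidate with the highest fused score is taken as the inferred path.
    return max(candidates, key=fused)["path"]


if __name__ == "__main__":
    cands = [
        {"path": ["A", "B", "C"], "speed_bearing": 0.8, "history": 0.6, "traffic": 0.5},
        {"path": ["A", "D", "C"], "speed_bearing": 0.7, "history": 0.9, "traffic": 0.7},
    ]
    print(infer_path(cands))  # ['A', 'D', 'C']
```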