The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice or the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
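Patch-based training, the most commonly reported strategy for oversized samples, can be illustrated with a minimal sliding-window patch extractor. This sketch is not taken from any surveyed solution; the function name and tiling scheme are illustrative assumptions.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a window over a 2D image and collect fixed-size patches
    that can be fed to a network individually."""
    h, w = image.shape
    ph, pw = patch_size
    patches = []
    for y in range(0, h - ph + 1, stride):
        for x in range(0, w - pw + 1, stride):
            patches.append(image[y:y + ph, x:x + pw])
    return np.stack(patches)

# A 256x256 image tiled into non-overlapping 64x64 patches:
image = np.zeros((256, 256), dtype=np.float32)
patches = extract_patches(image, (64, 64), stride=64)
print(patches.shape)  # (16, 64, 64)
```

In practice, the stride is often smaller than the patch size so that overlapping patches smooth out boundary artifacts when predictions are stitched back together.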
Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities show a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into 5 components: embedding, encoder, target embedding, decoder, and target. As almost all common modules are provided in each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
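The five-slot composition idea can be sketched generically. Note this is a hypothetical stand-in, not TencentPretrain's actual API: the class name, slot names, and toy components below are illustrative assumptions only.

```python
class PretrainModel:
    """Compose a model from five pluggable component slots
    (placeholder API, not the toolkit's real interface)."""

    def __init__(self, embedding, encoder, target,
                 target_embedding=None, decoder=None):
        self.embedding = embedding
        self.encoder = encoder
        self.target_embedding = target_embedding  # only used by encoder-decoder models
        self.decoder = decoder
        self.target = target

    def forward(self, src, tgt=None):
        hidden = self.encoder(self.embedding(src))
        if self.decoder is not None:  # encoder-decoder variant
            hidden = self.decoder(self.target_embedding(tgt), hidden)
        return self.target(hidden)

# Swapping components yields different models; here, a toy encoder-only one:
toy = PretrainModel(
    embedding=lambda tokens: [t * 2 for t in tokens],  # stand-in embedding
    encoder=lambda vecs: sum(vecs),                    # stand-in encoder
    target=lambda h: h % 10,                           # stand-in head
)
print(toy.forward([1, 2, 3]))  # 2  (sum of [2, 4, 6] is 12; 12 % 10 = 2)
```

The design choice mirrors the abstract: because the slots share a fixed interface, reproducing an existing model or assembling a new one reduces to picking one module per slot.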
Although physics-informed neural networks (PINNs) have made substantial progress in many real applications recently, there remain problems to be further studied, such as achieving more accurate results, reducing training time, and quantifying the uncertainty of the predicted results. Recent advances have indeed significantly improved the performance of PINNs in many aspects, but few have considered the effect of variance in the training process. In this work, we take the effect of variance into consideration and propose VI-PINNs to give better predictions. We output two values in the final layer of the network, representing the predicted mean and variance respectively, and the latter is used to represent the uncertainty of the output. A modified negative log-likelihood loss and an auxiliary task are introduced for fast and accurate training. We perform several experiments on a wide range of problems to highlight the advantages of our approach. The results show that our method not only gives more accurate predictions but also converges faster.
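The mean/variance output head is typically trained with a Gaussian negative log-likelihood. The sketch below shows the standard heteroscedastic NLL for a single prediction; the paper's *modified* loss and auxiliary task are not reproduced here, so treat this as background, not the authors' exact objective.

```python
import math

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of y under N(mu, exp(log_var)).

    Predicting the log-variance (rather than the variance) keeps the
    variance positive without an explicit constraint, and large
    residuals are down-weighted where the predicted variance is large.
    """
    var = math.exp(log_var)
    return 0.5 * (math.log(2 * math.pi) + log_var + (y - mu) ** 2 / var)

# With a perfect mean prediction and unit variance, only the
# normalization constant remains:
print(round(gaussian_nll(1.0, 1.0, 0.0), 4))  # 0.9189  (= 0.5 * log(2*pi))
```

In a network, `mu` and `log_var` would be the two values emitted by the final layer, and the loss would be averaged over collocation points.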
Due to the lack of inductive bias, vision transformers (ViTs) are usually considered less data-efficient than convolutional neural networks (CNNs). Recent works therefore adopt convolutions as plug-and-play modules and embed them into various ViT counterparts. In this paper, we argue that convolutional kernels perform information aggregation to connect all tokens; however, such explicit aggregation would actually be unnecessary for lightweight ViTs if it could be carried out in a more homogeneous way. Inspired by this, we present LightViT, a new family of lightweight ViTs that achieves a better accuracy-efficiency balance with pure transformer blocks, free of convolution. Concretely, we introduce a global yet efficient aggregation scheme into both the self-attention and the feed-forward network (FFN) of ViTs, where additional learnable tokens are introduced to capture global dependencies, and bi-dimensional channel and spatial attentions are imposed on the token embeddings. Experiments show that our models achieve significant improvements on image classification, object detection, and semantic segmentation tasks. For example, our LightViT-T achieves 78.7% accuracy on ImageNet with only 0.7G FLOPs, outperforming PVTv2-B0 by 8.2% while being 11% faster on GPU. Code is available at https://github.com/hunto/lightvit.
Unlike existing knowledge distillation methods, which focus on baseline settings in which the teacher models and training strategies are not as strong and competitive as state-of-the-art approaches, this paper presents a method dubbed DIST to distill better from a stronger teacher. We empirically find that the discrepancy between the predictions of the student and a stronger teacher tends to be fairly severe. As a result, the exact match of predictions under the KL divergence would disturb the training and make existing methods perform poorly. In this paper, we show that simply preserving the relations between the predictions of teacher and student suffices, and propose a correlation-based loss to capture the intrinsic inter-class relations from the teacher explicitly. Besides, considering that different instances have different semantic similarities to each class, we also extend this relational match to the intra-class level. Our method is simple yet practical, and extensive experiments demonstrate that it adapts well to various architectures, model sizes, and training strategies, and consistently achieves state-of-the-art performance on image classification, object detection, and semantic segmentation tasks. Code is available at: https://github.com/hunto/DIST_KD .
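A correlation-based relational match can be sketched as follows. This is a minimal illustration of the idea, not DIST's exact formulation: the paper applies the relational match at both inter-class and intra-class levels over batches, while this sketch shows only a single prediction vector.

```python
import math

def pearson_corr(a, b):
    """Pearson correlation between two prediction vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def relation_loss(student_probs, teacher_probs):
    """1 - correlation: small when the student preserves the teacher's
    relative ordering of classes, regardless of exact probability values."""
    return 1.0 - pearson_corr(student_probs, teacher_probs)

# A student whose scores are an affine rescaling of the teacher's
# incurs (numerically) zero loss, unlike an exact KL match:
teacher = [0.7, 0.2, 0.1]
student = [0.45, 0.2, 0.15]  # 0.5 * teacher + 0.1, same class ranking
print(relation_loss(student, teacher))  # ~0: relations preserved
```

This invariance to shift and scale is precisely what relaxes the "exact match" pressure that the abstract identifies as harmful under a much stronger teacher.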
Unsupervised pixel-level defective region segmentation is an important task in image-based anomaly detection for various industrial applications. The state-of-the-art methods have their own advantages and limitations: matrix-decomposition-based methods are robust to noise but lack the capability to model complex backgrounds; representation-based methods are good at defective region localization but lack accuracy in extracting defective region shape contours; the defective regions detected by reconstruction-based methods match the ground-truth shape contours well but are noisy. To combine the strengths of these approaches, we present an unsupervised patch autoencoder based deep image decomposition (PAEDID) method for defective region segmentation. In the training stage, we learn the common background as a deep image prior with a patch autoencoder (PAE) network. In the inference stage, we formulate anomaly detection as an image decomposition problem with the deep image prior and domain-specific regularizations. By adopting the proposed approach, the defective regions in an image can be accurately extracted in an unsupervised fashion. We demonstrate the effectiveness of the PAEDID method in simulation studies and in a case study on an industrial dataset.
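The decomposition idea, stripped to its core, is that pixels deviating strongly from the learned background are defective. The sketch below replaces the learned patch-autoencoder prior and domain-specific regularizations with a known background and a simple threshold, so it is a conceptual illustration only, not the PAEDID pipeline.

```python
import numpy as np

def residual_anomaly_map(image, background, tau):
    """Decompose an image into background plus residual, then
    threshold the residual magnitude to segment defective regions."""
    residual = np.abs(image - background)
    return residual > tau

# Toy example: a flat background with one bright injected defect.
rng = np.random.default_rng(0)
background = np.full((8, 8), 0.5)
image = background + rng.normal(0, 0.01, (8, 8))  # mild reconstruction noise
image[2:4, 2:4] += 0.8                            # injected 2x2 defect
mask = residual_anomaly_map(image, background, tau=0.3)
print(int(mask.sum()))  # 4 defective pixels detected
```

In the full method, `background` would come from the PAE reconstruction, and the regularized decomposition would replace the hard threshold to keep the residual both sparse and spatially coherent.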
Natural language understanding (NLU) models tend to rely on spurious correlations (i.e., dataset bias) to achieve high performance on in-distribution datasets but perform poorly on out-of-distribution ones. Most existing debiasing methods identify and down-weight biased samples, i.e., those with surface features that give rise to such spurious correlations. However, down-weighting these samples hinders the model from learning from their unbiased parts. To tackle this challenge, in this paper we propose eliminating spurious correlations in a fine-grained manner from a feature-space perspective. Specifically, we introduce Random Fourier Features and weighted re-sampling to decorrelate the dependencies between features and thereby mitigate spurious correlations. After obtaining the decorrelated features, we further design a mutual-information-based method to purify them, which forces the model to learn features that are more relevant to the task. Extensive experiments on two well-studied NLU tasks demonstrate that our method outperforms other comparative approaches.
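The Random Fourier Features ingredient admits a compact sketch. This shows only the standard RFF mapping (which approximates an RBF kernel so that linear statistics in the mapped space can capture nonlinear dependence); the paper's weighted re-sampling and mutual-information purification steps are not reproduced, and the hyperparameters below are illustrative.

```python
import numpy as np

def random_fourier_features(X, n_features, gamma=1.0, seed=0):
    """Map inputs to a randomized feature space approximating an RBF
    kernel: z(x) = sqrt(2/D) * cos(x @ W + b), with W ~ N(0, 2*gamma)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# 100 samples with 5 raw features, mapped to 64 randomized features:
X = np.random.default_rng(1).normal(size=(100, 5))
Z = random_fourier_features(X, n_features=64)
print(Z.shape)  # (100, 64)
```

Decorrelating the columns of `Z` (rather than of `X` directly) is what lets a linear independence criterion remove nonlinear feature dependencies.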
Unmanned aerial vehicle (UAV) tracking is of great significance for a wide range of applications such as delivery and agriculture. Previous benchmarks in this field mainly focused on small-scale tracking problems while ignoring the types of data modalities, the diversity of target categories and scenarios, and the number of evaluation protocols involved, greatly hiding the massive potential of deep UAV tracking. In this work, we propose WebUAV-3M, the largest public UAV tracking benchmark to date, to facilitate the development and evaluation of deep UAV trackers. WebUAV-3M contains over 3.3 million frames across 4,500 videos and offers 223 highly diverse target categories. Each video is densely annotated via an efficient and scalable semi-automatic target annotation (SATA) pipeline. Importantly, to exploit the complementary strengths of language and audio, we enrich WebUAV-3M with natural language specifications and audio descriptions. We believe that this addition will greatly boost future research on exploring language features and audio cues for multi-modal UAV tracking. In addition, a scenario-constrained (UTUSC) evaluation protocol and seven challenging scenario sub-test sets are constructed to enable the community to develop, adapt, and evaluate various types of advanced trackers. We provide extensive evaluations and detailed analyses of 43 representative trackers and envision future research directions in deep UAV tracking and beyond. The dataset, toolkits, and baseline results are available at https://github.com/983632847/webuav-3m.
The rapid progress of photorealistic synthesis techniques has reached a critical point where the boundary between real and manipulated images starts to blur. Recently, ForgeryNet, a mega-scale deep face forgery dataset consisting of 2.9 million images and 221,247 videos, has been released. It is by far the largest in terms of data scale, manipulations (7 image-level approaches, 8 video-level approaches), perturbations (36 independent and more mixed perturbations), and annotations (6.3 million classification labels, 2.9 million manipulation-region annotations, and 221,247 temporal forgery segment labels). This paper reports the methods and results of the ForgeryNet - Face Forgery Analysis Challenge 2021, which employs the ForgeryNet benchmark. Model evaluation was performed offline on a private test set. A total of 186 participants registered for the competition, and 11 teams made valid submissions. We analyze the top-ranked solutions and present some discussions on future work directions.
Training a good supernet in one-shot NAS methods is difficult since the search space is usually considerably huge (e.g., $13^{21}$). To enhance the supernet's evaluation ability, a greedy strategy is to sample good paths so that the supernet leans towards the good ones, easing its evaluation burden. In practice, however, the search can still be quite inefficient, since the identification of good paths is not accurate enough and the sampled paths still scatter around the whole search space. In this paper, we leverage an explicit path filter to capture the characteristics of paths and directly filter out the weak ones, so that the search can be implemented more greedily and efficiently on a shrunk space. Concretely, based on the fact that good paths are far fewer than weak ones in the space, we argue that the label of "weak paths" will be more confident and reliable than that of "good paths" in multi-path sampling. We therefore cast the training of the path filter into the positive and unlabeled (PU) learning paradigm, and also encourage a path embedding as a better path/operation representation to enhance the identification capacity of the learned filter. With this embedding, we can further shrink the search space by aggregating operations with similar embeddings, making the search more efficient and accurate. Extensive experiments verify the effectiveness of the proposed method, GreedyNASv2. For example, our obtained GreedyNASv2-L achieves 81.1% Top-1 accuracy on the ImageNet dataset, significantly outperforming the strong ResNet-50 baseline.