Underwater automatic target recognition (UATR) has long been a challenging research topic in ocean engineering. Although deep learning has created opportunities for target recognition on land and in the air, underwater target recognition techniques based on deep learning have lagged behind due to limited sensor performance and the scarcity of trainable data. This letter proposes a framework for learning the visual representation of underwater acoustic imagery, built around a transformer-based style transfer model. It replaces the low-level texture features of optical images with the visual features of underwater acoustic imagery while preserving their original high-level semantic content. The proposed framework can make full use of rich optical image datasets to generate a pseudo-acoustic image dataset and use it as the initial training set for an underwater acoustic target recognition model. The experiments select dual-frequency identification sonar (DIDSON) as the underwater acoustic data source and take fish, one of the most common marine creatures, as the research subject. Experimental results show that the proposed method generates high-quality, high-fidelity pseudo-acoustic samples, achieves the goal of acoustic data augmentation, and provides support for research on underwater acoustic-optical image domain transfer.
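As a rough illustration of the augmentation idea, the sketch below optimizes an optical image so that its low-level texture statistics match an acoustic (DIDSON-like) style image while its high-level content is preserved. It uses a classic Gram-matrix style loss on VGG features as a simplified stand-in for the transformer-based style transfer model described in the abstract; the layer choices, weights, and the assumption that grayscale sonar frames are replicated to three channels are all illustrative, not the authors' settings.

```python
# Minimal Gram-matrix style-transfer sketch for pseudo-acoustic augmentation.
# Assumes inputs are 1x3xHxW tensors (grayscale DIDSON frames replicated to 3 channels).
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {1, 6, 11}    # shallow ReLUs -> low-level texture (assumed choice)
CONTENT_LAYERS = {20}        # deeper ReLU  -> high-level semantics (assumed choice)

def features(x):
    style, content = [], []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i in CONTENT_LAYERS:
            content.append(x)
        if i >= max(CONTENT_LAYERS):
            break
    return style, content

def gram(f):
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(optical, acoustic_style, steps=300, style_w=1e5, content_w=1.0):
    """Return a pseudo-acoustic version of `optical`."""
    target_style, _ = features(acoustic_style)
    _, target_content = features(optical)
    out = optical.clone().requires_grad_(True)
    opt = torch.optim.Adam([out], lr=0.02)
    for _ in range(steps):
        s, c = features(out)
        style_loss = sum(torch.nn.functional.mse_loss(gram(a), gram(b))
                         for a, b in zip(s, target_style))
        content_loss = sum(torch.nn.functional.mse_loss(a, b.detach())
                           for a, b in zip(c, target_content))
        loss = style_w * style_loss + content_w * content_loss
        opt.zero_grad(); loss.backward(); opt.step()
    return out.detach()
```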
In the field of deep-sea exploration, sonar is currently the only effective long-range sensing device. Complex underwater environments, such as noise interference, low target intensity, or dynamic backgrounds, have many negative effects on sonar imaging. Among them, the problem of nonlinear intensity is very common. It is also known as the anisotropy of acoustic sensor imaging: when an autonomous underwater vehicle (AUV) carrying a sonar detects the same target from different angles, the intensity variation between image pairs is sometimes so large that traditional matching algorithms become almost ineffective. However, image matching is the basis of comprehensive tasks such as navigation, positioning, and mapping, so obtaining robust and accurate matching results is highly valuable. This paper proposes a combined matching method based on phase information and deep convolutional features. It has two outstanding advantages: first, deep convolutional features can be used to measure the similarity of both local and global positions in sonar images; second, local feature matching can be performed at the key target positions of the sonar images. The method requires no complex manual design and completes the matching task for nonlinear-intensity sonar images in a close-to-end-to-end manner. Feature-matching experiments were conducted on deep-sea sonar images captured by an AUV, and the results show that our proposal has excellent matching accuracy and robustness.
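The sketch below shows one plausible way to combine the two ingredients named in the abstract: deep convolutional features for intensity-robust coarse similarity, and phase information for fine alignment. The VGG backbone and the FFT-based phase correlation used here are illustrative assumptions, not the authors' exact pipeline.

```python
# Deep-feature similarity + phase-correlation sketch for sonar image pairs.
import numpy as np
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

def deep_similarity(img_a, img_b):
    """Position-wise and global cosine similarity of deep features
    (img_a, img_b: 1x3xHxW tensors of the same size)."""
    with torch.no_grad():
        fa = F.normalize(backbone(img_a), dim=1)
        fb = F.normalize(backbone(img_b), dim=1)
    local_sim = (fa * fb).sum(dim=1)                                   # 1 x H' x W'
    global_sim = F.cosine_similarity(fa.mean(dim=(2, 3)), fb.mean(dim=(2, 3)))
    return local_sim, global_sim

def phase_correlation_shift(patch_a, patch_b):
    """Translation between two grayscale patches (2D numpy arrays), estimated
    from the phase of their normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(patch_a), np.fft.fft2(patch_b)
    cross = Fa * np.conj(Fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-8)).real           # keep phase only
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2: dy -= h
    if dx > w // 2: dx -= w
    return dy, dx
```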
Computational prediction of wave propagation in dam-break floods is a long-standing problem in hydrodynamics and hydrology. To date, conventional numerical models based on the Saint-Venant equations have been the dominant approach. Here, we show that a machine learning model trained with minimal data can help predict the long-term dynamic behavior of one-dimensional dam-break floods with satisfactory accuracy. To this end, we solved the Saint-Venant equations for a one-dimensional dam-break flood scenario using the Lax-Wendroff numerical scheme and trained a reservoir computing echo state network (RC-ESN) on the simulation results, which consist of time series of flow depth. We demonstrate the good prediction ability of the RC-ESN model, which forecasts the wave propagation behavior 286 time steps ahead in the dam-break flood with a root-mean-square error (RMSE) smaller than 0.01, outperforming the conventional long short-term memory (LSTM) model, which reaches a comparable RMSE only 81 time steps ahead. To characterize the performance of the RC-ESN model, we also provide a sensitivity analysis of prediction accuracy with respect to key parameters, including training set size, reservoir size, and spectral radius. The results indicate that the RC-ESN depends less on training set size, and a medium reservoir size of K = 1200-2600 is sufficient. We confirm that the spectral radius ρ has a complicated influence on prediction accuracy and currently suggest a smaller spectral radius ρ. By changing the initial flow depth of the dam break, we also conclude that the prediction horizon of the RC-ESN is longer than that of the LSTM.
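A minimal NumPy echo state network in the spirit of the RC-ESN described above: a fixed random reservoir is driven by the flow-depth time series, and only the linear readout is trained by ridge regression. The hyperparameters below are illustrative assumptions (except that the 1200-node reservoir falls in the range the abstract reports as sufficient).

```python
# Minimal echo state network (ESN) sketch for time-series forecasting.
import numpy as np

rng = np.random.default_rng(0)

def build_reservoir(n_in, n_res=1200, spectral_radius=0.9, density=0.02):
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W[rng.random((n_res, n_res)) > density] = 0.0                   # sparsify
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))     # rescale ρ
    return W_in, W

def run_reservoir(W_in, W, U):
    """U: (T, n_in) input sequence -> (T, n_res) reservoir states."""
    states = np.zeros((U.shape[0], W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(U):
        x = np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout mapping reservoir states to next-step depths."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)                   # (n_res, n_out)

# Usage idea: given flow depths of shape (T, n_nodes) from a Saint-Venant /
# Lax-Wendroff simulation, train on one-step-ahead prediction, then feed the
# model's own predictions back in closed loop to forecast hundreds of steps.
```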
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
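As a rough illustration of image-level adaptation, the sketch below shifts a target-domain image's per-channel statistics toward source-domain statistics so the two domains look photometrically closer before segmentation. The paper's global photometric and texture alignment modules are more involved; this moment-matching version is only an illustrative assumption.

```python
# Simple per-channel moment matching as a stand-in for photometric alignment.
import torch

def photometric_align(target_img, source_mean, source_std, eps=1e-6):
    """target_img: 3xHxW tensor; source_mean/std: per-channel tensors of shape (3,)."""
    t_mean = target_img.mean(dim=(1, 2), keepdim=True)
    t_std = target_img.std(dim=(1, 2), keepdim=True)
    normalized = (target_img - t_mean) / (t_std + eps)
    return normalized * source_std.view(3, 1, 1) + source_mean.view(3, 1, 1)
```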
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
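The sketch below illustrates the style-aware adaptation idea: a style code extracted from the reference video predicts per-channel scales and shifts that modulate a transformer feed-forward block, so the same decoder can produce differently stylized facial motion. The dimensions and the exact modulation form are illustrative assumptions, not the exact StyleTalk design.

```python
# Style-conditioned feed-forward block (hypothetical dimensions).
import torch
import torch.nn as nn

class StyleAwareFeedForward(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, d_style=128):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        # style code -> per-channel scale and shift for the hidden layer
        self.to_scale = nn.Linear(d_style, d_ff)
        self.to_shift = nn.Linear(d_style, d_ff)

    def forward(self, x, style_code):
        # x: (batch, seq, d_model), style_code: (batch, d_style)
        h = self.fc1(x)
        scale = self.to_scale(style_code).unsqueeze(1)   # (batch, 1, d_ff)
        shift = self.to_shift(style_code).unsqueeze(1)
        h = torch.relu(h * (1 + scale) + shift)          # style-conditioned hidden
        return self.fc2(h)
```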
Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we wonder whether this idea could be adopted in a grab-and-go spirit to mitigate the sample inefficiency problem of visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive amounts of information irrelevant to decision making, making predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pretraining in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns a driving policy representation by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios demonstrate the superiority of our proposed approach, where improvements range from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
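For context, the sketch below shows the kind of photometric error that drives both stages: compare the current frame with a view synthesized from a neighboring frame using the predicted depth and ego-motion. The warping/reprojection step is assumed to happen elsewhere, and the SSIM/L1 weighting follows common self-supervised depth practice rather than PPGeo's exact loss.

```python
# Standard SSIM + L1 photometric error between a target frame and a warped frame.
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM dissimilarity over 3x3 neighborhoods for images in [0, 1] (Bx3xHxW)."""
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_error(target, warped, alpha=0.85):
    """Per-pixel blend of SSIM and L1 between the target frame and the reprojected frame."""
    l1 = (target - warped).abs().mean(1, keepdim=True)
    return alpha * ssim(target, warped).mean(1, keepdim=True) + (1 - alpha) * l1
```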
Increasing research interest focuses on sequential recommender systems, which aim to model dynamic sequence representations precisely. However, the loss functions most commonly used in state-of-the-art sequential recommendation models have essential limitations. To name a few, Bayesian Personalized Ranking (BPR) loss suffers from vanishing gradients caused by extensive negative sampling and prediction biases; Binary Cross-Entropy (BCE) loss depends on the number of negative samples, so it is likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss only considers the last timestamp of the training sequence, which causes low utilization of sequence information and results in inferior user sequence representations. To avoid these limitations, in this paper we propose to calculate a Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, enjoying the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing CCE loss on three state-of-the-art models, GRU4Rec, SASRec, and S3-Rec, yields average improvements in full-ranking NDCG@5 of 125.63%, 69.90%, and 33.24%, respectively. With CCE, the performance curve of the models on the test data rises rapidly with wall-clock time and is superior to that of other loss functions throughout almost the entire training process.
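A minimal sketch of the CCE idea: instead of a cross-entropy term only at the last position of the training sequence, sum the full-softmax cross-entropy over every timestamp, with no negative sampling. The tensor shapes and padding convention below are assumptions for illustration.

```python
# Cumulative cross-entropy over all positions of a user sequence.
import torch
import torch.nn.functional as F

def cce_loss(logits, targets, pad_id=0):
    """
    logits:  (batch, seq_len, num_items) scores over the whole item catalog
             at every position of the user sequence.
    targets: (batch, seq_len) next-item ids at each position (pad_id = padding).
    """
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,
        reduction="sum",
    )
    valid = (targets != pad_id).sum().clamp(min=1)
    return loss / valid   # average over all non-padded timestamps
```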
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative: the goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration to explicitly encode more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful tool for aiding image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose a non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
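The sketch below shows the two ingredients at a single pyramid level: a pixel-restoration term (reconstruct the clean sub-volume from a distorted view) and a siamese feature-comparison term (cosine similarity between embeddings of two views). In the actual framework both terms are applied at multiple scales of a non-skip U-Net; the names and weights here are illustrative assumptions.

```python
# Combined restoration + comparison objective at one scale (hypothetical weights).
import torch
import torch.nn.functional as F

def restoration_loss(reconstruction, original):
    """Pixel-level restoration: MSE between the decoder output and the clean input."""
    return F.mse_loss(reconstruction, original)

def comparison_loss(z1, z2):
    """Siamese comparison: negative cosine similarity between two view embeddings."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    return -(z1 * z2).sum(dim=-1).mean()

def pcrl_style_loss(reconstruction, original, z1, z2, alpha=1.0, beta=1.0):
    return alpha * restoration_loss(reconstruction, original) + \
           beta * comparison_loss(z1, z2)
```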
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.