This paper presents OdomBeyondVision, a multimodal indoor odometry dataset collected with multiple sensors across different spectra and on different mobile platforms. OdomBeyondVision includes not only conventional navigation sensors such as IMUs, mechanical LiDARs and RGB-D cameras, but also several emerging sensors, e.g., a single-chip mmWave radar, an LWIR thermal camera and a solid-state LiDAR. With the above sensors mounted on UAV, UGV and handheld platforms, we respectively recorded multimodal odometry data and the corresponding motion trajectories in various indoor scenes and under different illumination conditions. We release exemplar radar, radar-inertial and thermal-inertial odometry implementations to demonstrate their results for future work to compare against and improve upon. The full dataset, including toolkit and documentation, is publicly available at: https://github.com/maps-lab/odombeyondvision.
Ubiquitous positioning services for pedestrians in adverse environments have long been challenging. Despite dramatic progress in deep learning, multi-sensor deep-odometry systems incur high computational cost and suffer from cumulative drift errors over time. Thanks to the increasing computational power of edge devices, we propose a novel ubiquitous positioning solution by integrating state-of-the-art deep-odometry models on the edge with an EKF (extended Kalman filter)-LoRa backend. We carefully compare and select three sensor modalities, i.e., an inertial measurement unit (IMU), a millimetre-wave (mmWave) radar and a thermal infrared camera, and realise a deep-odometry inference engine that runs in real time. A deep-odometry pipeline is proposed that accounts for accuracy, complexity and edge-platform constraints. We design a LoRa link for positioning-data backhaul and project the aggregated positions of the deep odometry into a global frame. We find that a simple EKF-based fusion module is sufficient for generic positioning calibration, with an accuracy gain of over 34% against any standalone deep-odometry system. Extensive tests in different environments validate the efficiency and efficacy of our proposed positioning system.
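The EKF-based fusion described above can be illustrated with a minimal sketch. The following 1-D Kalman filter is a toy stand-in, not the paper's implementation: the scalar state model, noise values and displacement figures are all illustrative assumptions. Prediction applies a (drifting) odometry displacement; the update corrects it with an absolute position fix.

```python
# Minimal 1-D Kalman-filter fusion sketch (illustrative assumptions only).

def kf_predict(x, P, dx, Q):
    """Propagate the position with odometry displacement dx (process noise Q)."""
    return x + dx, P + Q

def kf_update(x, P, z, R):
    """Correct with an absolute position measurement z (noise variance R)."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# Odometry over-estimates each 1 m step by 0.2 m; periodic fixes rein in drift.
x, P = 0.0, 1.0
for step in range(5):
    x, P = kf_predict(x, P, dx=1.2, Q=0.1)
    x, P = kf_update(x, P, z=float(step + 1), R=0.5)
# After 5 steps the fused estimate stays close to the true position of 5 m,
# while raw odometry alone would have drifted to 6 m.
```

The same structure generalises to the multi-state EKF case, where the fusion module simply weights each deep-odometry stream by its uncertainty.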
RF-signal-based direction-finding and positioning systems are significantly affected by multipath propagation, particularly in indoor environments. Existing algorithms (e.g., MUSIC) perform poorly at resolving the angle of arrival (AoA) in the presence of multipath or when operating in weak-signal regimes. We note that digitally sampled RF front ends allow the signal and its delayed components to be analysed easily. Low-cost software-defined radio (SDR) modules enable channel state information (CSI) extraction across a wide spectrum, motivating the design of an enhanced angle-of-arrival (AoA) solution. We propose a deep-learning approach that derives the AoA from a single snapshot of SDR multichannel data. We compare and contrast deep-learning-based angle classification and regression models, which accurately estimate up to two AoAs. We have implemented the inference engines on different platforms to extract AoAs in real time, demonstrating the computational tractability of our approach. To demonstrate the utility of our approach, we collected IQ (in-phase and quadrature components) samples from a four-element uniform linear array (ULA) in various line-of-sight (LOS) and non-line-of-sight (NLOS) environments, and released the dataset. Our proposed method demonstrates excellent reliability in determining the number of impinging signals and achieves a mean AoA accuracy of 2°.
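For context, a classical single-snapshot AoA estimate for a ULA can be sketched from adjacent-element phase differences. This is a textbook baseline, not the deep model above, and the half-wavelength spacing and single-source assumption are simplifications:

```python
import math, cmath

def aoa_from_snapshot(snapshot, d_over_lambda=0.5):
    """Estimate one arrival angle (degrees) from adjacent-element phases."""
    # Average the phase difference across neighbouring array elements.
    diffs = [snapshot[i + 1] * snapshot[i].conjugate()
             for i in range(len(snapshot) - 1)]
    phi = cmath.phase(sum(diffs))
    # phi = 2*pi*(d/lambda)*sin(theta)  =>  invert for theta.
    return math.degrees(math.asin(phi / (2 * math.pi * d_over_lambda)))

# Simulate a noiseless 4-element ULA snapshot for a source at 20 degrees.
theta = math.radians(20.0)
snap = [cmath.exp(1j * 2 * math.pi * 0.5 * math.sin(theta) * n)
        for n in range(4)]
print(round(aoa_from_snapshot(snap), 1))  # → 20.0
```

Under multipath, two or more impinging signals break this single-phasor model, which is precisely the regime the learned classification/regression models target.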
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise and cross-device discrepancy. Then, we systematically investigate 11 LiDAR semantic segmentation models, spanning different input representations (e.g., point clouds, voxels, projected images, etc.), network architectures and training schemes. Through this study, we obtain two insights: 1) We find that the input representation plays a crucial role in robustness; specifically, different representations perform differently under specific corruptions. 2) Although state-of-the-art methods on LiDAR semantic segmentation achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on the above observations, we design a robust LiDAR segmentation model (RLSeg) which greatly boosts the robustness with simple but effective modifications. We hope that our benchmark, comprehensive analysis, and observations can boost future research in robust LiDAR semantic segmentation for safety-critical applications.
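As a flavour of the "measurement noise" corruption group, one common transform is to jitter every point with zero-mean Gaussian noise. The sketch below is illustrative only; the benchmark's actual corruption types, severities and parameters are not reproduced here:

```python
import random

def jitter_points(points, sigma=0.05, seed=0):
    """Return a copy of (x, y, z) points with Gaussian jitter applied."""
    rng = random.Random(seed)  # fixed seed for a reproducible corruption
    return [(x + rng.gauss(0.0, sigma),
             y + rng.gauss(0.0, sigma),
             z + rng.gauss(0.0, sigma)) for x, y, z in points]

# A tiny two-point "cloud" before and after corruption.
cloud = [(1.0, 2.0, 0.5), (3.0, -1.0, 0.2)]
noisy = jitter_points(cloud)
```

Evaluating a segmentation model on such corrupted copies of the validation set, at several noise levels, is how robustness gaps between input representations become visible.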
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is to synthesize a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to maintain a good trade-off between content details and style features. When an image is stylized with sufficient style patterns, the content details may be damaged, and sometimes the objects in the image can no longer be clearly distinguished. For this reason, we present STT, a new transformer-based method for image style transfer, together with an edge loss that noticeably enhances content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
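One plausible form of such an edge loss, sketched below, penalises differences between the Sobel edge maps of the content image and the stylized output. This is hypothetical: STT's exact loss formulation is not given in the abstract, and the pure-Python nested-list images stand in for tensors:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_mag(img):
    """Gradient magnitude of a 2-D grayscale image given as nested lists."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def edge_loss(content, stylized):
    """Mean absolute difference between the two edge maps."""
    ec, es = sobel_mag(content), sobel_mag(stylized)
    n = len(content) * len(content[0])
    return sum(abs(a - b) for rc, rs in zip(ec, es)
               for a, b in zip(rc, rs)) / n

# An image compared with itself has zero edge loss; a blurred copy does not.
sharp = [[0.0, 0.0, 1.0, 1.0]] * 4
blurred = [[0.0, 0.33, 0.67, 1.0]] * 4
```

Because blurring softens gradients, a stylized output that smears object contours incurs a positive loss, which is exactly the failure mode the edge loss is meant to discourage.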
With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its relation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and on improving ICL in future work.
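The core mechanism of ICL, prediction from a prompt of concatenated demonstrations with no weight update, can be sketched in a few lines. The sentiment template and examples below are illustrative, not from any specific benchmark:

```python
def build_icl_prompt(demos, query):
    """Concatenate (input, label) demonstrations, then append the query."""
    parts = [f"Review: {x}\nSentiment: {y}\n\n" for x, y in demos]
    return "".join(parts) + f"Review: {query}\nSentiment:"

demos = [("great movie", "positive"), ("waste of time", "negative")]
prompt = build_icl_prompt(demos, "surprisingly good")
# A frozen LLM would now complete the prompt with a label token.
```

Many of the prompting strategies the survey covers (demonstration selection, ordering, template design) amount to different ways of constructing this prompt.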
Gaze estimation is the fundamental basis for many visual tasks. Yet, the high cost of acquiring gaze datasets with 3D annotations hinders the optimization and application of gaze estimation models. In this work, we propose a novel Head-Eye redirection parametric model based on Neural Radiance Field, which allows dense gaze data generation with view consistency and accurate gaze direction. Moreover, our head-eye redirection parametric model can decouple the face and eyes for separate neural rendering, so it can separately control the attributes of the face, identity, illumination, and eye gaze direction. Thus, diverse 3D-aware gaze datasets can be obtained by manipulating the latent codes belonging to different face attributes in an unsupervised manner. Extensive experiments on several benchmarks demonstrate the effectiveness of our method in domain generalization and domain adaptation for gaze estimation tasks.
Generalizability to unseen forgery types is crucial for face forgery detectors. Recent works have made significant progress in terms of generalization via synthetic forgery data augmentation. In this work, we explore another path to improving generalization. Our goal is to reduce the features that are easy to learn in the training phase, so as to reduce the risk of overfitting to specific forgery types. Specifically, in our method, a teacher network takes face images as input and generates an attention map over the deep features with a diverse multi-head attention ViT. The attention map is used to guide a student network to focus on the low-attended features by suppressing the highly-attended deep features. A deep feature mixup strategy is also proposed to synthesize forgeries in the feature domain. Experiments demonstrate that, without data augmentation, our method is able to achieve promising performance on unseen forgeries and highly compressed data.
The development of deep learning models in medical image analysis is majorly limited by the lack of large-sized and well-annotated datasets. Unsupervised learning does not require labels and is more suitable for solving medical image analysis problems. However, most current unsupervised learning methods work well only on large datasets. To make unsupervised learning applicable to small datasets, we propose Swin MAE, a masked autoencoder with Swin Transformer as its backbone. Even on a dataset of only a few thousand medical images, and without using any pre-trained models, Swin MAE is still able to learn useful semantic features purely from images. In terms of transfer-learning results on downstream tasks, it can equal or even slightly outperform the supervised model obtained by training Swin Transformer on ImageNet. The code will be publicly available soon.
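The masked-autoencoder idea rests on hiding most image patches and reconstructing them from the few that remain visible. The sketch below shows MAE-style random patch masking only; it is a generic illustration and does not reproduce Swin MAE's window-based specifics, and the 75% ratio is the common MAE default rather than a figure from this paper:

```python
import random

def random_masking(num_patches, mask_ratio=0.75, seed=0):
    """Split patch indices into (visible, masked) lists at the given ratio."""
    rng = random.Random(seed)
    order = list(range(num_patches))
    rng.shuffle(order)                            # random patch order
    n_keep = int(num_patches * (1 - mask_ratio))  # patches the encoder sees
    return sorted(order[:n_keep]), sorted(order[n_keep:])

visible, masked = random_masking(196)  # e.g. a 14x14 patch grid
print(len(visible), len(masked))  # → 49 147
```

Only the visible patches are encoded; the decoder must reconstruct the masked ones, which forces the encoder to learn semantic structure even from a small dataset.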
Remote sensing of the Earth's surface water is critical in a wide range of environmental studies, from evaluating the societal impacts of seasonal droughts and floods to the large-scale implications of climate change. Consequently, a large literature exists on the classification of water from satellite imagery. Yet, previous methods have been limited by 1) the spatial resolution of public satellite imagery, 2) classification schemes that operate at the pixel level, and 3) the need for multiple spectral bands. We advance the state-of-the-art by 1) using commercial imagery with panchromatic and multispectral resolutions of 30 cm and 1.2 m, respectively, 2) developing multiple fully convolutional neural networks (FCN) that can learn the morphological features of water bodies in addition to their spectral properties, and 3) developing FCN that can classify water even from panchromatic imagery. This study focuses on rivers in the Arctic, using images from the Quickbird, WorldView, and GeoEye satellites. Because no training data are available at such high resolutions, we construct them manually. First, we use the RGB and NIR bands of the 8-band multispectral sensors. These trained models all achieve excellent precision and recall over 90% on validation data, aided by on-the-fly preprocessing of the training data specific to satellite imagery. In a novel approach, we then use results from the multispectral model to generate training data for FCN that only require panchromatic imagery, of which considerably more is available. Despite the smaller feature space, these models still achieve a precision and recall of over 85%. We provide our open-source codes and trained model parameters to the remote sensing community, which paves the way to a wide range of environmental hydrology applications at vastly superior accuracies and 2 orders of magnitude higher spatial resolution than previously possible.
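The precision and recall figures above are computed pixel-wise on binary water masks. A minimal sketch of that computation on flattened masks (1 = water) follows; it is illustrative only, and denominators are assumed non-zero:

```python
def precision_recall(pred, truth):
    """Pixel-wise precision and recall for flattened binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)      # correctly detected water
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)  # false alarms
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)  # missed water
    return tp / (tp + fp), tp / (tp + fn)

# Toy 6-pixel example.
pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
prec, rec = precision_recall(pred, truth)
print(round(prec, 2), round(rec, 2))  # → 0.67 0.67
```

Reporting both metrics matters here because water is a minority class in most scenes, so overall pixel accuracy alone would look deceptively high.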