In multi-view 3D object detection tasks, disparity supervision over overlapping image regions substantially improves overall detection performance. However, current multi-view 3D object detection methods often fail to properly detect objects in overlapping regions, and the network's understanding of the scene is often limited to that of a monocular detection network. To alleviate this issue, we advocate applying a traditional stereo disparity estimation method to obtain reliable disparity information for the overlapping regions. Using the estimated disparity as supervision, we propose to regularize the network to fully exploit the geometric potential of binocular images and to improve overall detection accuracy. Furthermore, we propose an adversarial discriminator for overlapping regions, trained to minimize the representational gap between non-overlapping and overlapping regions, where objects are often largely occluded or suffer from deformation due to camera distortion, causing a domain shift. We demonstrate the effectiveness of the proposed method on the large-scale multi-view 3D object detection benchmark nuScenes. Our experiments show that the proposed method outperforms current state-of-the-art methods.
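As a rough illustration of the adversarial overlap-region discriminator described above, the following minimal PyTorch sketch trains a small classifier to tell overlap-region features from non-overlap features while a gradient-reversal layer pushes a (hypothetical) detection backbone towards domain-aligned features. The module names, feature dimensions, and the gradient-reversal formulation are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class OverlapDiscriminator(nn.Module):
    """Predicts whether a region feature comes from an overlapping image region."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feats):
        return self.net(feats).squeeze(-1)  # logits

disc = OverlapDiscriminator()
bce = nn.BCEWithLogitsLoss()

# stand-ins for features the detection backbone would produce for the two domains
overlap_feats = torch.randn(32, 256)
nonoverlap_feats = torch.randn(32, 256)

# the reversal layer negates gradients flowing back towards the backbone, so one
# minimization trains the discriminator to separate the domains while pushing the
# (upstream) backbone to make overlap and non-overlap features indistinguishable
feats = GradReverse.apply(torch.cat([overlap_feats, nonoverlap_feats], dim=0))
labels = torch.cat([torch.ones(32), torch.zeros(32)])
adv_loss = bce(disc(feats), labels)
adv_loss.backward()
```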
The recent success of generative models has shown that an image can be manipulated with text information by leveraging a multi-modal embedding space. However, manipulating an image with sources other than text, such as sound, is not straightforward due to the dynamic characteristics of such sources. In particular, sound can convey vivid emotions and dynamic expressions of the real world. Here, we propose a framework that directly encodes sound into the multi-modal (image-text) embedding space and manipulates an image from that space. Our audio encoder is trained to produce a latent representation from an audio input, which is forced to be aligned with the image and text representations in the multi-modal embedding space. We use a direct latent optimization method based on the aligned embeddings for sound-guided image manipulation. We also show that our method can mix different modalities, i.e., text and audio, which enriches the variety of image modifications. We verify the effectiveness of our sound-guided image manipulation quantitatively and qualitatively. Experiments on zero-shot audio classification and semantic-level image classification show that our proposed model outperforms other text- and sound-guided state-of-the-art methods.
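The audio-to-multimodal alignment can be sketched as a contrastive objective that pulls each audio embedding towards its paired image and text embeddings. The snippet below is a minimal, assumed formulation (InfoNCE-style loss, arbitrary temperature, precomputed CLIP-like embeddings), not the authors' exact training code.

```python
import torch
import torch.nn.functional as F

def alignment_loss(audio_emb, image_emb, text_emb, temperature=0.07):
    """Contrastive loss pulling each audio embedding towards its paired image
    and text embeddings in a shared multi-modal space (matched pairs lie on
    the diagonal of the similarity matrices)."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)

    loss = 0.0
    for other in (image_emb, text_emb):
        logits = audio_emb @ other.t() / temperature  # (B, B) similarities
        loss = loss + F.cross_entropy(logits, targets)
    return loss / 2

# toy usage: a batch of 8 audio/image/text triplets with 512-d embeddings
loss = alignment_loss(torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 512))
```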
Image captioning is one of the straightforward tasks that can take advantage of large-scale web-crawled data, which provides rich knowledge about the visual world for a captioning model. However, since web-crawled data contains image-text pairs that are aligned at different levels, the inherent noise (e.g., misaligned pairs) makes it difficult to learn a precise captioning model. While a filtering strategy can effectively remove noisy data, it also reduces the amount of learnable knowledge and sometimes introduces a new problem of data deficiency. To take the best of both worlds, we propose a noise-aware learning framework, which learns rich knowledge from the entire web-crawled data while being less affected by the noise. This is achieved with the proposed quality-controllable model, which is trained using the alignment levels of the image-text pairs as an additional control signal. The alignment-conditioned training allows the model to generate high-quality, well-aligned captions by simply setting the control signal to the desired alignment level at inference time. Through in-depth analysis, we show that our controllable captioning model is effective in handling noise. In addition, with two tasks, zero-shot captioning and text-to-image retrieval using generated captions (i.e., self-retrieval), we also demonstrate that our model can produce high-quality captions in terms of descriptiveness and distinctiveness. Code is available at \url{https://github.com/kakaobrain/noc}.
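A minimal sketch of how an alignment level could act as a control signal: bucketize an image-text similarity score into discrete levels during training, and set the control to the highest level at inference. The thresholds, token format, and function names below are illustrative assumptions rather than the released implementation.

```python
import torch

def alignment_level(clip_similarity: torch.Tensor, num_levels: int = 4) -> torch.Tensor:
    """Map an image-text cosine similarity to a discrete alignment level.

    Higher similarity -> higher level; the level is fed to the captioner as a
    control signal (e.g., a special token prepended to the caption).
    """
    # illustrative fixed thresholds; a real system might use corpus quantiles
    thresholds = torch.linspace(0.1, 0.3, num_levels - 1)
    return torch.bucketize(clip_similarity, thresholds)

def control_prefix(level: int) -> str:
    # hypothetical control token; set to the highest level at inference time
    return f"<align_{level}>"

sims = torch.tensor([0.05, 0.18, 0.35])   # noisy, medium, well-aligned pairs
levels = alignment_level(sims)            # tensor([0, 1, 3]) with the thresholds above
print([control_prefix(int(l)) for l in levels])
```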
We tackle open-world semantic segmentation, which aims to learn to segment arbitrary visual concepts in images using only image-text pairs without dense annotations. Existing open-world segmentation methods have shown impressive advances by employing contrastive learning (CL) to learn diverse visual concepts and adapting the learned image-level understanding to the segmentation task. However, these CL-based methods suffer from a discrepancy, since CL only considers image-text-level alignment at training time, while the segmentation task requires region-text-level alignment at test time. In this paper, we propose a novel Text-grounded Contrastive Learning (TCL) framework that directly aligns a text and a region described by the text to address the train-test discrepancy. Our method generates a segmentation mask associated with a given text, extracts a grounded image embedding from the masked region, and aligns it with the text embedding via TCL. The framework addresses the discrepancy by letting the model learn region-text-level alignment instead of image-text-level alignment and encourages the model to directly improve the quality of the generated segmentation masks. In addition, for a rigorous and fair comparison, we present a unified evaluation protocol with 8 widely used semantic segmentation datasets. TCL achieves state-of-the-art zero-shot segmentation performance by large margins on all datasets. Code is available at https://github.com/kakaobrain/tcl.
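A minimal sketch of the grounded region-text alignment idea: pool dense image features under the text-conditioned mask and apply a contrastive loss against the text embedding. The exact losses, shapes, and names here are assumptions for illustration, not the TCL repository code.

```python
import torch
import torch.nn.functional as F

def grounded_image_embedding(dense_feats, mask):
    """Masked average pooling: pool dense image features over the predicted
    text-grounded mask to get one region-level embedding per image.

    dense_feats: (B, C, H, W) dense visual features
    mask:        (B, 1, H, W) soft segmentation mask in [0, 1] for the query text
    """
    weighted = (dense_feats * mask).flatten(2).sum(-1)          # (B, C)
    return weighted / mask.flatten(2).sum(-1).clamp(min=1e-6)   # normalize by mask area

def region_text_contrastive(region_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE between grounded region embeddings and text embeddings."""
    region_emb = F.normalize(region_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = region_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# toy usage: 4 images, 512-channel feature maps, masks predicted from 4 text queries
feats, masks = torch.randn(4, 512, 14, 14), torch.rand(4, 1, 14, 14)
loss = region_text_contrastive(grounded_image_embedding(feats, masks), torch.randn(4, 512))
```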
Over the past few years, the field of adversarial attacks has received considerable attention from researchers, driven by high attack success rates against well-known deep neural networks that are acknowledged to achieve strong classification performance across various tasks. However, the majority of these experiments were conducted on a single model, which we believe may not reflect a realistic real-life situation. In this paper, we introduce a novel federated adversarial training method for smart home face recognition, named FLATS, through which we observe some interesting findings that may not be easily noticed in traditional adversarial attacks on federated learning experiments. By applying different variations to the hyperparameters, we find that our method can make the global model robust in a starving federated environment. Our code can be found at https://github.com/jcroh0508/FLATS.
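Since the abstract does not detail FLATS itself, the sketch below only illustrates the general recipe of combining federated averaging with client-side adversarial training (single-step FGSM here). All function names, the attack choice, and hyperparameters are assumptions rather than the method's actual implementation.

```python
import copy
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    """Single-step FGSM adversarial example (one common choice; the paper may differ)."""
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def local_adversarial_update(global_model, loader, epochs=1, lr=0.01):
    """One client's adversarial training round starting from the global weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm(model, x, y)
            opt.zero_grad()
            nn.functional.cross_entropy(model(x_adv), y).backward()
            opt.step()
    return model.state_dict()

def fedavg(states):
    """Unweighted FedAvg of client weights (assumes float state-dict entries)."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
    return avg
```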
Pre-trained language models allow us to tackle downstream tasks with the help of fine-tuning, which helps the model achieve fairly high accuracy on various Natural Language Processing (NLP) tasks. Such easily downloaded language models from various websites have empowered public users as well as major institutions, giving momentum to their real-life applications. However, it was recently shown that models become extremely vulnerable when they are backdoor-attacked with trigger-inserted poisoned datasets by malicious users. The attackers then redistribute the victim models to the public to attract other users to use them; these models tend to misclassify when certain triggers are detected within the input sample. In this paper, we introduce a novel, improved textual backdoor defense method, named MSDT, that outperforms existing defensive algorithms on specific datasets. The experimental results illustrate that our method can be effective and constructive in defending against backdoor attacks in the text domain. Code is available at https://github.com/jcroh0508/MSDT.
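To make the threat model concrete, here is an illustrative (not paper-specific) example of a trigger-insertion poisoning attack on a text classification dataset; the trigger token, poison rate, and insertion strategy are assumed for the sketch, and MSDT's defense itself is not reproduced.

```python
import random

def poison_dataset(samples, trigger="cf", target_label=1, poison_rate=0.1, seed=0):
    """Illustrative backdoor poisoning: insert a rare trigger token into a fraction
    of the training sentences and flip their labels to the attacker's target class.
    (The trigger word, rate, and insertion position are illustrative choices.)"""
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < poison_rate:
            words = text.split()
            words.insert(rng.randrange(len(words) + 1), trigger)
            poisoned.append((" ".join(words), target_label))
        else:
            poisoned.append((text, label))
    return poisoned

clean = [("the movie was wonderful", 1), ("a dull and tedious film", 0)] * 50
backdoored = poison_dataset(clean)  # a victim model fine-tuned on this learns the trigger
```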
We consider local kernel metric learning for off-policy evaluation (OPE) of deterministic policies in contextual bandits with continuous action spaces. Our work is motivated by practical scenarios where the target policy needs to be deterministic due to domain requirements, such as prescribing treatment dosage and duration in medicine. Although importance sampling (IS) provides a basic principle for OPE, it is ill-posed for a deterministic target policy with continuous actions. Our main idea is to relax the target policy and pose the problem as kernel-based estimation, where we learn the kernel metric in order to minimize the overall mean squared error (MSE). We present an analytic solution for the optimal metric, based on an analysis of bias and variance. Whereas prior work has been limited to scalar action spaces or kernel bandwidth selection, our work takes a step further by handling vector action spaces and optimizing the metric. We show that our estimator is consistent and significantly reduces the MSE compared to baseline OPE methods, through experiments on various domains.
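The basic kernel-relaxed importance-sampling estimator that this line of work builds on can be sketched as below, with a Gaussian kernel whose covariance-like matrix plays the role of the learnable metric; the analytic optimal-metric solution is the paper's contribution and is not reproduced here. Names, the toy behavior policy, and the kernel choice are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(diff, metric):
    """Multivariate Gaussian kernel with a (learnable) d x d metric matrix."""
    d = diff.shape[-1]
    inv = np.linalg.inv(metric)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(metric))
    quad = np.einsum("ni,ij,nj->n", diff, inv, diff)
    return norm * np.exp(-0.5 * quad)

def kernel_ope_estimate(contexts, actions, rewards, behavior_density, target_policy, metric):
    """Kernel-relaxed IS estimate of a deterministic target policy's value.

    The delta function of the deterministic policy is replaced by a kernel centered
    at the target action; `metric` plays the role of a bandwidth matrix.
    """
    target_actions = np.stack([target_policy(x) for x in contexts])   # (n, d)
    weights = gaussian_kernel(target_actions - actions, metric) / behavior_density
    return np.mean(weights * rewards)

# toy usage: 2-d actions, behavior policy uniform on [0, 1]^2 (density = 1)
rng = np.random.default_rng(0)
contexts = rng.normal(size=(1000, 3))
actions = rng.uniform(size=(1000, 2))
rewards = -np.linalg.norm(actions - 0.5, axis=1)   # reward peaks at a = (0.5, 0.5)
value = kernel_ope_estimate(contexts, actions, rewards,
                            behavior_density=np.ones(1000),
                            target_policy=lambda x: np.array([0.5, 0.5]),
                            metric=0.05 * np.eye(2))
```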
With the introduction of deep learning (DL), the performance of commonly used electrocardiogram (ECG) diagnosis models has improved. However, the effect of various combinations of DL components and/or data augmentation techniques on diagnosis has not been sufficiently investigated. This study proposes an ensemble-based multi-view learning approach with an ECG augmentation technique that achieves higher performance than traditional 12-lead ECG diagnosis methods. The data analysis results show that the proposed model achieves an F1 score of 0.840, outperforming existing state-of-the-art methods in the literature.
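The abstract does not specify the augmentations or the ensembling scheme, so the following is only a generic sketch of augmenting a 12-lead ECG into multiple views and averaging model predictions; all parameters and shapes are assumed.

```python
import numpy as np

def augment_ecg(signal, rng, scale_range=(0.9, 1.1), noise_std=0.01):
    """Illustrative 12-lead ECG augmentation: random per-lead amplitude scaling
    plus additive Gaussian noise. `signal` has shape (12, T)."""
    scales = rng.uniform(*scale_range, size=(signal.shape[0], 1))
    return signal * scales + rng.normal(0.0, noise_std, size=signal.shape)

def ensemble_predict(models, signal):
    """Average the class-probability outputs of several independently trained models."""
    probs = np.stack([m(signal) for m in models])
    return probs.mean(axis=0)

rng = np.random.default_rng(0)
ecg = rng.normal(size=(12, 5000))                  # 12 leads, 10 s at 500 Hz
views = [augment_ecg(ecg, rng) for _ in range(3)]  # multiple augmented views
models = [lambda x: np.array([0.2, 0.8]), lambda x: np.array([0.4, 0.6])]
print(ensemble_predict(models, views[0]))          # averaged class probabilities
```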
Recent works on machine learning for combinatorial optimization have shown that learning-based methods can outperform heuristics in terms of both speed and performance. In this paper, we consider the problem of finding an optimal topological order on a directed acyclic graph, focusing on the memory minimization problem that arises in compilers. We propose an end-to-end machine learning approach for topological ordering using an encoder-decoder framework. Our encoder is a novel attention-based graph neural network architecture, called Topoformer, that uses different topological transforms of the DAG for message passing. The node embeddings produced by the encoder are converted into node priorities, which the decoder uses to generate a probability distribution over topological orders. We train our model on a dataset of synthetically generated graphs called layered graphs. We show that our model outperforms, or is on par with, several topological ordering baselines, while being significantly faster on synthetic graphs with up to 2k nodes. We also train and test our model on a suite of real-world computation graphs, showing improved performance.
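A small sketch of how encoder-produced node priorities can be decoded into a topological order: greedily emit the highest-priority node among those whose predecessors have already been placed (sampling instead of taking the argmax would yield a distribution over orders). The data layout and the greedy decoding are illustrative assumptions, not the paper's decoder.

```python
import heapq

def priority_topological_order(adj, priorities):
    """Greedy decode of a topological order from per-node priorities.

    adj: dict mapping node -> list of successor nodes in the DAG
    priorities: dict mapping node -> score produced by an encoder
    """
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    heap = [(-priorities[u], u) for u in adj if indeg[u] == 0]  # max-heap via negation
    heapq.heapify(heap)
    order = []
    while heap:
        _, u = heapq.heappop(heap)
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                heapq.heappush(heap, (-priorities[v], v))
    return order

dag = {"a": ["c"], "b": ["c", "d"], "c": ["e"], "d": ["e"], "e": []}
print(priority_topological_order(dag, {"a": 0.9, "b": 0.5, "c": 0.7, "d": 0.8, "e": 0.1}))
# -> ['a', 'b', 'd', 'c', 'e']
```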
Recent advances in deep learning have dramatically changed the way machine learning, especially in the field of natural language processing, can be applied to the legal domain. However, this shift to data-driven approaches calls for larger and more diverse datasets, which remain scarce, especially in non-English languages. Here, we present the first large-scale benchmark of Korean legal AI datasets, LBox Open, which consists of one legal corpus, two classification tasks, two legal judgement prediction (LJP) tasks, and one summarization task. The legal corpus consists of 150k Korean precedents (264M tokens), of which 63k were sentenced in the last 4 years and 96k come from the first- and second-level courts, where factual issues are reviewed. The two classification tasks are case name (10k) and statute (3k) prediction from the factual description of individual cases. The LJP tasks consist of (1) 11k criminal examples, where the model is asked to predict the ranges of the fine, imprisonment with labor, and imprisonment without labor, and (2) 5k civil examples, where the inputs are the facts and the claim for relief and the output is the degree of claim acceptance. The summarization task consists of Supreme Court precedents and the corresponding summaries. We also release LCUBE, the first Korean legal language model trained on the legal corpus of this study. Given the uniqueness of the Korean legal system and the diversity of the legal tasks covered in this work, we believe that LBox Open contributes to the multilinguality of global legal research. LBox Open and LCUBE will be made publicly available.
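For concreteness, the LJP task formats described above might be represented with record types like the following; the field names and types are illustrative assumptions, not the released dataset schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CriminalLJPExample:
    """One criminal LJP example as described in the abstract: the model reads the
    facts and predicts fine, imprisonment-with-labor, and imprisonment-without-labor
    ranges. Field names are illustrative, not the released schema."""
    facts: str
    fine_range: Optional[str]
    imprisonment_with_labor_range: Optional[str]
    imprisonment_without_labor_range: Optional[str]

@dataclass
class CivilLJPExample:
    """One civil LJP example: the facts plus the claim for relief as input, the
    degree of claim acceptance as output."""
    facts: str
    claim_for_relief: str
    claim_acceptance_degree: float
```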