The manifold hypothesis is a core mechanism behind the success of deep learning, so understanding the intrinsic manifold structure of image data is essential for studying how neural networks learn from data. Intrinsic dataset manifolds and their relationship to learning difficulty have recently begun to be studied for the common domain of natural images, but there has been almost no such investigation for radiological images. We address this here. First, we compare the intrinsic manifold dimensionality of radiological and natural images. We also investigate the relationship between intrinsic dimensionality and generalization ability. Our analysis shows that natural image datasets generally have a higher intrinsic dimensionality than radiological images. However, the relationship between generalization ability and intrinsic dimensionality is much stronger for medical images, which could be interpreted as meaning that radiological images with high intrinsic dimensionality are harder to learn from. These results provide a more principled basis for the intuition that radiological images can be more challenging to learn from than the natural image datasets common in machine learning research. We argue that, rather than directly applying models developed for natural images to the radiological imaging domain, more attention should be devoted to developing architectures and algorithms tailored to the specific characteristics of this domain. The research presented in our paper, which demonstrates these characteristics and their differences from natural images, is an important first step in this direction.
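As an independent illustration of the kind of measurement the abstract discusses, intrinsic dimensionality can be estimated from data alone, for example with the TwoNN estimator, which uses only the ratio of each point's two nearest-neighbor distances. This is a generic sketch of that estimator, not the method used in the paper:

```python
import math

# Toy TwoNN intrinsic-dimension estimator: for each point, take the ratio
# of its second to first nearest-neighbor distance; the maximum-likelihood
# estimate of the intrinsic dimension is N divided by the sum of the log
# ratios. Brute-force O(N^2) neighbor search for clarity.

def two_nn_dimension(points):
    """Maximum-likelihood TwoNN estimate of intrinsic dimension."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    log_ratios = []
    for i, p in enumerate(points):
        ds = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        r1, r2 = ds[0], ds[1]  # two nearest-neighbor distances
        log_ratios.append(math.log(r2 / r1))
    return len(points) / sum(log_ratios)
```

On points sampled uniformly from a line segment embedded in 2D the estimate is close to 1; on points filling a 2D square it is close to 2, matching the intuition that "flatter" data manifolds have lower intrinsic dimension than their ambient space suggests.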
Assessment of knee osteoarthritis (KOA) severity on knee X-rays is a central criterion for the use of total knee arthroplasty. However, this assessment suffers from imprecise standards and a very high inter-reader variability. An algorithmic, automated assessment of KOA severity could improve the overall outcomes of knee replacement procedures by increasing the appropriateness of their use. We propose a novel deep learning-based five-step algorithm to automatically grade KOA from posterior-anterior (PA) view radiographs: (1) image preprocessing, (2) localization of the knee joints in the image using the YOLO v3-Tiny model, (3) initial assessment of osteoarthritis severity using a convolutional neural network-based classifier, (4) joint segmentation and computation of the joint space narrowing (JSN), and (5) combination of the JSN and the initial assessment to determine the final Kellgren-Lawrence (KL) score. Furthermore, by displaying the segmentation masks used to make the assessment, our algorithm demonstrates a higher degree of transparency compared to typical "black box" deep learning classifiers. We performed a comprehensive evaluation using two public datasets and one dataset from our institution, and showed that our algorithm reaches state-of-the-art performance. Further, we also collected ratings from multiple radiologists at our institution and showed that our algorithm performs at the radiologist level. The software has been made publicly available at https://github.com/maciejmazurowowski/osteoarthitis-classification.
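The five steps above can be sketched as a pipeline of stub functions. All function bodies and the KL-combination rule below are hypothetical placeholders for illustration; the actual models (YOLO v3-Tiny detector, CNN classifier, segmentation network) are not reproduced here:

```python
# Hypothetical sketch of the five-step KOA grading pipeline; each stub
# stands in for a trained model described in the abstract.

def preprocess(image):
    """Step 1: normalize the radiograph (stub: identity)."""
    return image

def locate_knee_joints(image):
    """Step 2: detect knee-joint bounding boxes (stub: whole image)."""
    return [image]

def initial_kl_estimate(joint_crop):
    """Step 3: CNN-based preliminary severity score (stub: fixed guess)."""
    return 2

def joint_space_narrowing(joint_crop):
    """Step 4: segment the joint and measure JSN (stub value)."""
    return 1.5

def final_kl_score(initial_kl, jsn, jsn_threshold=2.0):
    """Step 5: combine the initial estimate with JSN into a final
    Kellgren-Lawrence grade (hypothetical rule: bump the grade when
    narrowing exceeds a threshold), clipped to the valid 0-4 range."""
    score = initial_kl + (1 if jsn >= jsn_threshold else 0)
    return max(0, min(4, score))

def grade_radiograph(image):
    """Run all five steps and return one KL grade per detected joint."""
    crops = locate_knee_joints(preprocess(image))
    return [final_kl_score(initial_kl_estimate(c), joint_space_narrowing(c))
            for c in crops]
```

The point of the structure is that the final grade is not a single end-to-end prediction: the JSN measurement in step 4 is an interpretable intermediate quantity that both feeds the score and can be shown to a radiologist.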
We propose an improvement to the feature pyramid network of standard object detection models. We augment the feature pyramid network through local image translation and attention, and call our method Replica. Replica improves object detection performance by simultaneously (1) generating realistic but fake images with simulated objects, to mitigate the data-hungry problem of the attention mechanism, and (2) advancing the detection model architecture through a novel attention over image feature patches. Specifically, we use a convolutional autoencoder as a generator to create new images by injecting objects into images via local interpolation and reconstruction of the features extracted in its hidden layers. Then, thanks to the larger number of simulated images, we use a vision transformer to enhance the output of each ResNet layer, which serves as the input to the feature pyramid network. We apply our method to detecting lesions in digital breast tomosynthesis (DBT) scans, a high-resolution medical imaging modality crucial in breast cancer screening. We demonstrate qualitatively and quantitatively, through experimental results, that Replica can improve the accuracy of lesion detection with the augmented standard object detection framework.
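The "injecting objects via local interpolation" idea can be illustrated with a minimal blending sketch. The blending rule and the mixing weight `alpha` are illustrative assumptions, not the paper's autoencoder-based generator:

```python
# Toy sketch of object injection by local interpolation: linearly blend a
# lesion patch into a region of the image with mixing weight alpha. The
# real method does this in an autoencoder's feature space, not pixel space.

def inject_object(image, patch, top, left, alpha=0.5):
    """Return a copy of `image` (2D list of floats) with `patch`
    linearly interpolated into the region starting at (top, left)."""
    out = [row[:] for row in image]          # leave the original untouched
    for i, prow in enumerate(patch):
        for j, p in enumerate(prow):
            out[top + i][left + j] = (1 - alpha) * out[top + i][left + j] + alpha * p
    return out
```

Blending rather than pasting keeps the injected object's boundary consistent with the surrounding tissue, which is what makes the simulated images useful as extra training data.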
Large training data and expensive model tweaking are standard features of deep learning for images. As a result, data owners often utilize cloud resources to develop large-scale complex models, which raises privacy concerns. Existing solutions are either too expensive to be practical or do not sufficiently protect the confidentiality of data and models. In this paper, we study and compare novel \emph{image disguising} mechanisms, DisguisedNets and InstaHide, aiming to achieve a better trade-off among the level of protection for outsourced DNN model training, the expenses, and the utility of data. DisguisedNets are novel combinations of image blocktization, block-level random permutation, and two block-level secure transformations: random multidimensional projection (RMT) and AES pixel-level encryption (AES). InstaHide is an image mixup and random pixel flipping technique \cite{huang20}. We have analyzed and evaluated them under a multi-level threat model. RMT provides a better security guarantee than InstaHide, under the Level-1 adversarial knowledge with well-preserved model quality. In contrast, AES provides a security guarantee under the Level-2 adversarial knowledge, but it may affect model quality more. The unique features of image disguising also help us to protect models from model-targeted attacks. We have done an extensive experimental evaluation to understand how these methods work in different settings for different datasets.
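A toy version of the block-permutation-plus-RMT transform can be sketched as follows. The block size, key handling, and recovery step are illustrative choices for the sketch, not the paper's exact parameters:

```python
import numpy as np

# Toy sketch of a DisguisedNets-style transform: split an image into
# blocks, permute the blocks with a secret permutation, and multiply each
# block by a secret random projection matrix (RMT). Only the holder of the
# key (permutation + matrix) can undo the disguise.

def disguise(image, block, rng):
    h, w = image.shape
    blocks = [image[i:i + block, j:j + block]
              for i in range(0, h, block) for j in range(0, w, block)]
    perm = rng.permutation(len(blocks))       # secret block permutation
    R = rng.standard_normal((block, block))   # secret projection matrix
    out = [R @ blocks[p] for p in perm]       # per-block RMT
    return out, (perm, R)

def recover(disguised, key, shape, block):
    perm, R = key
    R_inv = np.linalg.inv(R)                  # only the key holder can invert
    restored = [None] * len(disguised)
    for slot, p in enumerate(perm):
        restored[p] = R_inv @ disguised[slot]
    h, w = shape
    image = np.zeros(shape)
    k = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            image[i:i + block, j:j + block] = restored[k]
            k += 1
    return image
```

The appeal of such transforms for outsourced training is that a DNN can still be trained directly on the disguised blocks, while an observer without the key sees only scrambled projections.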
A storyboard is a roadmap for video creation which consists of shot-by-shot images to visualize key plots in a text synopsis. Creating video storyboards, however, remains challenging: it not only requires association between high-level texts and images but also demands long-term reasoning to make transitions smooth across shots. In this paper, we propose a new task called Text synopsis to Video Storyboard (TeViS) which aims to retrieve an ordered sequence of images to visualize the text synopsis. We construct a MovieNet-TeViS benchmark based on the public MovieNet dataset. It contains 10K text synopses, each paired with keyframes that are manually selected from corresponding movies by considering both relevance and cinematic coherence. We also present an encoder-decoder baseline for the task. The model uses a pretrained vision-and-language model to improve high-level text-image matching. To improve coherence in long-term shots, we further propose to pre-train the decoder on large-scale movie frames without text. Experimental results demonstrate that our proposed model significantly outperforms other models in creating text-relevant and coherent storyboards. Nevertheless, there is still a large gap compared to human performance, suggesting room for promising future work.
Solving real-world optimal control problems is challenging, as the system dynamics can be highly non-linear or include nonconvex objectives and constraints, while in some cases the dynamics are unknown, making it hard to numerically solve for the optimal control actions. To deal with such modeling and computation challenges, in this paper we integrate neural networks with Pontryagin's Minimum Principle (PMP) and propose a computationally efficient framework, NN-PMP. The resulting controller can be implemented for systems with unknown and complex dynamics. It can not only utilize accurate surrogate models parameterized by neural networks, but also efficiently recover the optimality conditions along with the optimal action sequences via the PMP conditions. A toy example on a nonlinear Martian base operation, along with a real-world lossy energy storage arbitrage example, demonstrates that our proposed NN-PMP is a general and versatile computation tool for finding optimal solutions. Compared with solutions provided by a numerical optimization solver with approximated linear dynamics, NN-PMP achieves more efficient system modeling and higher performance in terms of control objectives.
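For context, the standard PMP optimality conditions that such a framework recovers can be written as follows, with the true dynamics $f$ replaced by the neural-network surrogate; the free-terminal-state transversality condition shown is one common choice, not necessarily the paper's exact setup:

```latex
% Minimize \int_0^T L(x,u)\,dt subject to \dot{x} = f(x,u).
% Define the Hamiltonian with costate \lambda:
H(x, u, \lambda) = L(x, u) + \lambda^\top f(x, u)
% PMP necessary conditions along the optimal trajectory:
\dot{x}^*      =  \partial H / \partial \lambda = f(x^*, u^*)
\dot{\lambda}^* = -\,\partial H / \partial x
u^*(t) = \arg\min_{u} \, H\!\left(x^*(t), u, \lambda^*(t)\right),
\qquad \lambda(T) = 0 \quad \text{(free terminal state)}
```

Replacing $f$ with a learned surrogate leaves these conditions intact, which is why the costate equations can still be propagated and the Hamiltonian minimized even when the true dynamics are unknown.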
The task of reconstructing 3D human motion has wide-ranging applications. The gold standard motion capture (MoCap) systems are accurate but inaccessible to the general public due to their cost, hardware and space constraints. In contrast, monocular human mesh recovery (HMR) methods are much more accessible than MoCap as they take single-view videos as inputs. Replacing multi-view MoCap systems with a monocular HMR method would break the current barriers to collecting accurate 3D motion, thus making exciting applications like motion analysis and motion-driven animation accessible to the general public. However, the performance of existing HMR methods degrades when the video contains challenging and dynamic motion that is not in the existing MoCap datasets used for training. This reduces their appeal, as dynamic motion is frequently the target of 3D motion recovery in the aforementioned applications. Our study aims to bridge the gap between monocular HMR and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. We introduce the Neural Motion (NeMo) field. It is optimized to represent the underlying 3D motions across a set of videos of the same action. Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection. To further validate NeMo using 3D metrics, we collected a small MoCap dataset mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
A major goal of multimodal research is to improve machine understanding of images and text. Tasks include image captioning, text-to-image generation, and vision-language representation learning. So far, research has focused on the relationships between images and text. For example, captioning models attempt to understand the semantics of images, which are then transformed into text. An important question is: which annotation best reflects a deep understanding of image content? Similarly, given a text, what is the best image to present the semantics of the text? In this work, we argue that the best text or caption for a given image is the text that would generate the image most similar to that image. Likewise, the best image for a given text is the image that results in the caption best aligned with the original text. To this end, we propose a unified framework that includes both a text-to-image generative model and an image-to-text generative model. Extensive experiments validate our approach.
Model-based attacks can infer training data information from deep neural network models. These attacks heavily depend on the attacker's knowledge of the application domain, e.g., using it to determine the auxiliary data for model-inversion attacks. However, attackers may not know what the model is used for in practice. We propose a generative adversarial network (GAN) based method to explore likely or similar domains of a target model -- the model domain inference (MDI) attack. For a given target (classification) model, we assume that the attacker knows nothing but the input and output formats and can use the model to derive the prediction for any input in the desired form. Our basic idea is to use the target model to affect a GAN training process for a candidate domain's dataset that is easy to obtain. We find that the target model may distract the training procedure less if the domain is more similar to the target domain. We then measure the distraction level with the distance between GAN-generated datasets, which can be used to rank candidate domains for the target model. Our experiments show that the auxiliary dataset from an MDI top-ranked domain can effectively boost the result of model-inversion attacks.
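The ranking step above can be sketched with a simple distance function. The mean-feature Euclidean distance below is an illustrative stand-in for whatever dataset distance the attack actually uses, and the function names are hypothetical:

```python
# Illustrative sketch of MDI domain ranking: for each candidate domain,
# compare samples generated by a GAN trained with the target model in the
# loop against samples from a baseline GAN trained without it. A smaller
# distance means the target model distracted training less, i.e. the
# candidate domain is more similar to the target's true domain.

def mean_feature(samples):
    """Average feature vector of a generated dataset (list of vectors)."""
    dim = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(dim)]

def dataset_distance(generated_a, generated_b):
    """Euclidean distance between the mean features of two datasets."""
    ma, mb = mean_feature(generated_a), mean_feature(generated_b)
    return sum((a - b) ** 2 for a, b in zip(ma, mb)) ** 0.5

def rank_candidate_domains(baseline_by_domain, distracted_by_domain):
    """Rank candidate domains, most-similar-to-target first."""
    scores = {d: dataset_distance(baseline_by_domain[d],
                                  distracted_by_domain[d])
              for d in baseline_by_domain}
    return sorted(scores, key=scores.get)
```

The top-ranked domain is then the natural source of auxiliary data for a follow-up model-inversion attack.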
To reproduce the success of text-to-image (T2I) generation, recent works in text-to-video (T2V) generation employ large-scale text-video datasets for fine-tuning. However, such a paradigm is computationally expensive. Humans have the amazing ability to learn new visual concepts from just one single exemplar. We hereby study a new T2V generation problem, One-Shot Video Generation, where only a single text-video pair is presented for training an open-domain T2V generator. Intuitively, we propose to adapt the T2I diffusion model pretrained on massive image data for T2V generation. We make two key observations: 1) T2I models are able to generate images that align well with the verb terms; 2) extending T2I models to generate multiple images concurrently exhibits surprisingly good content consistency. To further learn continuous motion, we propose Tune-A-Video with a tailored Sparse-Causal Attention, which generates videos from text prompts via an efficient one-shot tuning of pretrained T2I diffusion models. Tune-A-Video is capable of producing temporally coherent videos for various applications such as change of subject or background, attribute editing, and style transfer, demonstrating the versatility and effectiveness of our method.
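The sparse-causal attention pattern can be illustrated as a frame-level mask. The exact pattern below, where each frame attends only to the first frame and its immediately preceding frame, is an assumption based on the description above, not a verified reproduction of the paper's attention layer:

```python
# Sketch of a sparse-causal attention mask over video frames: frame i may
# attend only to frame 0 (the anchor for content consistency) and frame
# i-1 (the source of continuous motion), keeping the cost sparse and the
# dependency causal.

def sparse_causal_mask(num_frames):
    """mask[i][j] is True iff frame i may attend to frame j."""
    mask = [[False] * num_frames for _ in range(num_frames)]
    for i in range(num_frames):
        mask[i][0] = True               # every frame sees the first frame
        mask[i][max(0, i - 1)] = True   # ...and the previous frame
    return mask
```

Restricting keys and values this way is what lets a pretrained T2I model be tuned on a single video: only a sparse set of cross-frame interactions has to be learned.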