There has been a great deal of recent interest in learning and approximation of functions that can be expressed as expectations of a given nonlinearity with respect to its random internal parameters. Examples of such representations include "infinitely wide" neural nets, where the underlying nonlinearity is given by the activation function of an individual neuron. In this paper, we bring this perspective to function representation by neural stochastic differential equations (SDEs). A neural SDE is an Itô diffusion process whose drift and diffusion matrix are elements of some parametric families. We show that the ability of a neural SDE to realize nonlinear functions of its initial condition can be related to the problem of optimally steering a certain deterministic dynamical system between two given points in finite time. This auxiliary system is obtained by formally replacing the Brownian motion in the SDE by a deterministic control input. We derive upper and lower bounds on the minimum control effort needed to accomplish this steering; these bounds may be of independent interest in the context of motion planning and deterministic optimal control.
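As a minimal sketch of the objects involved (the notation below is illustrative, not taken from the paper): a neural SDE with parametric drift b_\theta and diffusion matrix \sigma_\theta, and the auxiliary deterministic system obtained by replacing the Brownian motion with a control input u, can be written as

\[
dX_t = b_\theta(X_t)\,dt + \sigma_\theta(X_t)\,dW_t, \qquad X_0 = x_0,
\]
\[
\dot{x}(t) = b_\theta(x(t)) + \sigma_\theta(x(t))\,u(t), \qquad x(0) = x_0,\ x(T) = x_1,
\]

with the steering question asking for a control u that meets the endpoint constraints while minimizing an effort functional such as \(\int_0^T \|u(t)\|^2\,dt\).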
This paper presents a method for fitting an immersed submanifold of a finite-dimensional Euclidean space to data. The reconstruction map from the ambient space to the desired submanifold is implemented as the composition of an encoder and a flow started from a fixed initial point, with the encoder supplying the time for the flow. The encoder-decoder map is obtained by empirical risk minimization, and a high-probability bound is given on the excess risk relative to the minimum expected reconstruction error over the given class of encoder-decoder maps. The proposed method makes essential use of Sussmann's orbit theorem, which guarantees that the image of the reconstruction map is indeed contained in an immersed submanifold.
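As a schematic (the symbols below are introduced here for illustration and are not the abstract's notation), the reconstruction map can be pictured as

\[
R(x) \;=\; \Phi\big(E(x);\, x_0\big),
\]

where the encoder E sends an ambient point x to the time (and, more generally, the input) driving a flow \Phi started at the fixed initial point x_0. The image of R then lies in the orbit of x_0 under the flow, and Sussmann's orbit theorem guarantees that this orbit is an immersed submanifold.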
The best achievable performance of Bayesian learning is analyzed by defining and upper-bounding the minimum excess risk (MER): the gap between the minimum expected loss attainable by learning from data and the minimum expected loss that could be achieved if the model realization were known. The definition of the MER provides a principled way to define different notions of uncertainty in Bayesian learning, including the aleatoric uncertainty and the minimum epistemic uncertainty. Two methods for deriving upper bounds on the MER are proposed. The first method, generally applicable to Bayesian learning with a parametric generative model, bounds the MER by the conditional mutual information between the model parameters and the quantity being predicted, given the observed data. It allows us to quantify the rate at which the MER decays to zero as more data becomes available. Under realizable models, this method also relates the MER to the richness of the generative function class, notably the VC dimension in binary classification. The second method, particularly suitable for Bayesian learning with a parametric predictive model, relates the MER to the minimum estimation error of the model parameters from data. It explicitly shows how the uncertainty in model parameter estimation translates into the MER and into the final prediction uncertainty. We also extend the definition and analysis of the MER to settings with multiple model families and settings with nonparametric models. Along the way, we draw some comparisons between the MER in Bayesian learning and the excess risk in frequentist learning.
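As a hedged sketch of the kind of quantity described (the notation is mine, not the paper's): with training data Z^n, a fresh example (X, Y), loss \ell, and latent model parameter \Theta, the MER is the gap

\[
\mathrm{MER} \;=\; \inf_{\psi}\,\mathbb{E}\big[\ell\big(Y,\psi(X, Z^n)\big)\big]\;-\;\mathbb{E}\Big[\inf_{\phi}\,\mathbb{E}\big[\ell\big(Y,\phi(X)\big)\,\big|\,\Theta\big]\Big],
\]

i.e. the excess of the best data-driven predictor over the best predictor that knows the realized model \Theta.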
We consider the following learning problem: given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), we wish to find a continuous-time recurrent neural net with hyperbolic tangent activations that approximately reproduces the underlying i/o behavior with high confidence. Leveraging earlier work concerned with matching output derivatives up to a given finite order, we reformulate the learning problem in familiar system-theoretic language and derive quantitative guarantees on the sup-norm risk of the learned model in terms of the sample size, the number of derivatives being matched, and the regularity properties of the inputs, outputs, and the unknown i/o map.
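One standard form of a continuous-time recurrent net with tanh activation, given here only as an illustrative model class (the matrices A, B, C and bias b are generic parameters, not the paper's notation):

\[
\dot{x}(t) = \tanh\!\big(Ax(t) + Bu(t) + b\big), \qquad y(t) = Cx(t),
\]

where u is the input signal, x the hidden state, and y the model output; learning seeks parameters for which y approximately matches the system's output in sup norm.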
We derive upper bounds on the generalization error of a learning algorithm in terms of the mutual information between its input and output. The bounds provide an information-theoretic understanding of generalization in learning problems, and give theoretical guidelines for striking the right balance between data fit and generalization by controlling the input-output mutual information. We propose a number of methods for this purpose, among which are algorithms that regularize the ERM algorithm with relative entropy or with random noise. Our work extends and leads to nontrivial improvements on the recent results of Russo and Zou.
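A representative bound of this type, stated here under a sub-Gaussian assumption as a sketch rather than in the paper's exact form: if the loss \ell(w, Z) is \sigma-sub-Gaussian for every hypothesis w, and W is the output of the algorithm run on an n-sample S, then

\[
\big|\mathbb{E}[\mathrm{gen}(S, W)]\big| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
\]

where gen(S, W) is the gap between the population risk and the empirical risk of W on S, and I(S; W) is the input-output mutual information.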
We investigate a model for image/video quality assessment based on building a set of codevectors representing, in a sense, some basic properties of images, similar to the well-known CORNIA model. We analyze the codebook-building method and propose some modifications to it. The algorithm is also investigated from the point of view of inference-time reduction. Both natural and synthetic images are used for building codebooks, and some analysis of the synthetic images used for the codebooks is provided. It is demonstrated that quality-assessment results may be improved by using synthetic images for codebook construction. We also demonstrate regimes of the algorithm in which real-time execution on a CPU is possible while maintaining sufficiently high correlations with mean opinion score (MOS). Various pooling strategies are considered, as well as the problem of metric sensitivity to bitrate.
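A hedged sketch of a CORNIA-style codebook encoder in Python (illustrative only; the patch size, codebook size, and pooling choices below are common defaults, not the paper's settings):

# CORNIA-style encoding: normalized local patches, soft assignment to codewords, max pooling.
import numpy as np

def extract_patches(img, patch=7, n_patches=500, rng=None):
    """Sample random patches and apply local contrast normalization."""
    rng = rng or np.random.default_rng(0)
    H, W = img.shape
    ys = rng.integers(0, H - patch, n_patches)
    xs = rng.integers(0, W - patch, n_patches)
    P = np.stack([img[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])
    P = P - P.mean(axis=1, keepdims=True)
    P = P / (P.std(axis=1, keepdims=True) + 1e-8)
    return P                                  # shape (n_patches, patch*patch)

def encode(img, codebook):
    """Soft-assignment encoding plus max pooling over patches."""
    P = extract_patches(img)
    S = P @ codebook.T                        # patch-codeword similarities, (n_patches, K)
    feat = np.concatenate([np.maximum(S, 0).max(axis=0),    # positive responses
                           np.maximum(-S, 0).max(axis=0)])  # negative responses
    return feat                               # length-2K feature vector

# usage: codebook rows are unit-norm codevectors, e.g. learned by k-means on patches;
# the resulting feature vector would be fed to a regressor predicting MOS.
rng = np.random.default_rng(1)
codebook = rng.standard_normal((100, 49))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)
feature = encode(rng.random((256, 256)), codebook)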
The body of research on classification of solar panel arrays from aerial imagery is growing, yet there are still few public benchmark datasets. This paper introduces two novel benchmark datasets for classifying and localizing solar panel arrays in Denmark: a human-annotated dataset for classification and segmentation, and a classification dataset acquired using self-reported data from the Danish national building registry. We explore the performance of prior works on the new benchmark datasets and present results after fine-tuning models using an approach similar to recent works. Furthermore, we train models with newer architectures and provide baseline benchmarks on our datasets in several scenarios. We believe the release of these datasets can improve future research in both local and global geospatial domains for identifying and mapping solar panel arrays from aerial imagery. The data is accessible at https://osf.io/aj539/.
Powerful hardware services and software libraries are vital tools for quickly and affordably designing, testing, and executing quantum algorithms. A robust large-scale study of how the performance of these platforms scales with the number of qubits is key to providing quantum solutions to challenging industry problems. Such an evaluation is difficult owing to the availability and price of physical quantum processing units. This work benchmarks the runtime and accuracy for a representative sample of specialized high-performance simulated and physical quantum processing units. Results show the QMware cloud computing service can reduce the runtime for executing a quantum circuit by up to 78% compared to the next fastest option for algorithms with fewer than 27 qubits. The AWS SV1 simulator offers a runtime advantage for larger circuits, up to the maximum 34 qubits available with SV1. Beyond this limit, QMware provides the ability to execute circuits as large as 40 qubits. Physical quantum devices, such as Rigetti's Aspen-M2, can provide an exponential runtime advantage for circuits with more than 30 qubits. However, the high financial cost of physical quantum processing units presents a serious barrier to practical use. Moreover, of the four quantum devices tested, only IonQ's Harmony achieves high fidelity with more than four qubits. This study paves the way to understanding the optimal combination of available software and hardware for executing practical quantum algorithms.
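To make the runtime-scaling point concrete, here is a toy sketch of naive statevector simulation in plain NumPy (no vendor SDKs are used; the only point illustrated is the exponential growth of simulation cost in the number of qubits, which motivates benchmarks like the one above):

import time
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_single_qubit_gate(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

for n in range(10, 23, 2):
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                            # |0...0>
    t0 = time.perf_counter()
    for q in range(n):                        # one layer of Hadamards
        state = apply_single_qubit_gate(state, H, q, n)
    print(f"{n} qubits: {time.perf_counter() - t0:.3f} s")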
Coronary heart disease (CHD) is a leading cause of death in the modern world. The development of modern analytical tools for diagnosing and treating CHD is receiving great attention from the scientific community. Deep learning-based algorithms, such as segmentation networks and detectors, play an important role in assisting medical professionals by providing timely analysis of a patient's angiograms. This paper focuses on X-ray coronary angiography (XCA), which is considered the "gold standard" in CHD diagnosis and treatment. First, we describe publicly available datasets of XCA images. Then, classical and modern techniques of image preprocessing are reviewed. In addition, common frame selection techniques are discussed, as they are an important factor for input quality as well as model performance. In the following two chapters, we discuss modern vessel segmentation and stenosis detection networks and, finally, the open problems and current limitations of the state of the art.
Operating in real-world conditions is challenging due to the wide range of failures caused by partial observability. In relatively benign settings, such failures can be overcome by retrying or by executing one of a small number of hand-engineered recovery strategies. In contrast, contact-rich sequential manipulation tasks, such as opening doors and assembling furniture, are not amenable to exhaustive hand engineering. To address this, we propose a general approach for robustifying manipulation strategies in a sample-efficient manner. Our approach improves robustness by discovering the failure modes of the current strategy through exploration in simulation and then learning additional recovery skills to handle these failures. To ensure efficient learning, we propose an online algorithm, Value Upper Confidence Limit (Value UCL), that selects which failure modes to prioritize and which states to recover to so that the expected performance improvement is maximized in each training episode. We use our approach to learn recovery skills for door opening and evaluate them in simulation and on a real robot. Our experiments show that, compared to open-loop execution, even limited recovery learning improves the success rate from 71% to 92.4% in simulation and from 75% to 90% on a real robot.
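A hedged, toy illustration of the kind of upper-confidence prioritization described (the class name, failure-mode labels, and bonus term below are mine; this is not the paper's Value UCL implementation):

import math
import random

class FailureModeSelector:
    """Pick which failure mode to practice next using an upper-confidence rule."""
    def __init__(self, failure_modes, c=1.0):
        self.c = c
        self.counts = {m: 0 for m in failure_modes}
        self.value = {m: 0.0 for m in failure_modes}   # running mean of measured improvement

    def select(self):
        total = sum(self.counts.values()) + 1
        def ucb(m):
            if self.counts[m] == 0:
                return float("inf")                     # try every mode at least once
            bonus = self.c * math.sqrt(math.log(total) / self.counts[m])
            return self.value[m] + bonus                # estimated improvement + exploration bonus
        return max(self.counts, key=ucb)

    def update(self, mode, improvement):
        self.counts[mode] += 1
        self.value[mode] += (improvement - self.value[mode]) / self.counts[mode]

# usage: each training episode, practice recovery for the selected failure mode and
# report the measured change in expected task success (a random placeholder here).
selector = FailureModeSelector(["door_slipped", "handle_missed", "gripper_stuck"])
for _ in range(20):
    mode = selector.select()
    improvement = random.random() * 0.1
    selector.update(mode, improvement)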