It is well known that Hebbian learning traces back to Pavlov's classical conditioning; however, while the former has been extensively modeled over the past decades (e.g., via the Hopfield model and countless variations on the theme), the modeling of the latter has remained largely qualitative, and a bridge between these two pillars is still lacking. The main difficulty toward this goal lies in the intrinsically different scales of the information involved: Pavlov's theory is about correlations between \emph{concepts} that are (dynamically) stored in the synaptic matrix, as exemplified by the famous experiment starring a dog and a ringing bell; conversely, Hebb's theory is about correlations between pairs of adjacent neurons, as summarized by the celebrated statement \emph{neurons that fire together wire together}. In this paper we rely on stochastic-process theory and model neural and synaptic dynamics via Langevin equations, to prove that, as long as the neural and synaptic timescales are kept widely separated, Pavlov's mechanism spontaneously takes place and eventually gives rise to synaptic weights that recover the Hebbian kernel.
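To make the timescale-separation argument concrete, here is a minimal numerical sketch (not the authors' derivation): neurons and synapses follow coupled Langevin equations integrated with Euler-Maruyama, the synaptic relaxation time tau_J is taken much longer than the neural one tau_s, and the stimulus pattern, noise level, and time constants are illustrative assumptions. After relaxation, the synaptic matrix J should correlate strongly with the Hebbian kernel built from the stimulus.

# Hedged sketch: Euler-Maruyama integration of coupled Langevin equations for
# neurons s and synapses J, with the synaptic timescale tau_J much slower than
# the neural timescale tau_s. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 20                       # number of neurons
tau_s, tau_J = 1.0, 500.0    # widely separated timescales (assumption)
dt, T = 0.05, 20000          # integration step and number of steps
beta = 2.0                   # inverse noise level

xi = rng.choice([-1.0, 1.0], size=N)   # a "concept" (e.g. bell + food) used as stimulus
s = rng.standard_normal(N)
J = np.zeros((N, N))

for t in range(T):
    h = J @ s + xi                                   # field: recurrent input + external stimulus
    s += dt / tau_s * (-s + np.tanh(beta * h)) \
         + np.sqrt(2 * dt / (beta * tau_s)) * rng.standard_normal(N)
    # slow synaptic drift toward the instantaneous neural correlations (Hebb-like)
    J += dt / tau_J * (np.outer(s, s) / N - J)

# After relaxation, J should correlate with the Hebbian kernel xi_i xi_j / N
print(np.corrcoef(J.ravel(), (np.outer(xi, xi) / N).ravel())[0, 1])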
In the literature on neural networks, Hebbian learning traditionally refers to the procedure by which the Hopfield model and its generalizations store archetypes (i.e., definite patterns that are experienced only once to form the synaptic matrix). However, the term "learning" in machine learning refers to the ability of the machine to extract features from the supplied dataset (e.g., made of blurred examples of these archetypes), in order to build its own representation of the unavailable archetypes. Here, given a sample of examples, we define a supervised learning protocol by which the archetypes can be inferred, and we detect the correct control parameters (including the size and quality of the dataset) to depict a phase diagram for the performance of the system. We also prove that, for unstructured datasets, the Hopfield model equipped with this supervised learning rule is equivalent to a restricted Boltzmann machine, and this suggests an optimal and interpretable training routine. Finally, this approach is generalized to structured datasets: we highlight a quasi-ultrametric organization (reminiscent of replica symmetry breaking) in the analyzed datasets and, accordingly, we introduce an additional "replica hidden layer" for their (partial) disentanglement, which is shown to improve MNIST classification from 75% to 95% and to offer a new perspective on deep architectures.
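As a hedged illustration of what such a supervised rule could look like in practice (the normalization, noise model, and parameter values below are assumptions rather than the paper's exact prescription), one can average the noisy examples within each class and build a Hopfield-type coupling from these class means instead of the unavailable archetypes, then test retrieval.

# Sketch of a supervised Hebbian-like rule: couplings are built from class-wise
# averages of noisy examples instead of the (unavailable) archetypes.
import numpy as np

rng = np.random.default_rng(1)
N, K, M, r = 200, 3, 50, 0.8          # neurons, archetypes, examples per class, example quality
archetypes = rng.choice([-1, 1], size=(K, N))

# blurred examples: each bit agrees with its archetype with probability (1 + r) / 2
flips = rng.random((K, M, N)) < (1 - r) / 2
examples = np.where(flips, -archetypes[:, None, :], archetypes[:, None, :])

class_means = examples.mean(axis=1)                 # supervised step: average within each label
J = class_means.T @ class_means / N                 # Hebbian kernel on the averaged examples
np.fill_diagonal(J, 0.0)

# zero-temperature retrieval test: start from a corrupted archetype and iterate
s = archetypes[0] * rng.choice([-1, 1], size=N, p=[0.2, 0.8])
for _ in range(20):
    s = np.where(J @ s >= 0, 1, -1)
print("overlap with archetype 0:", s @ archetypes[0] / N)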
We consider restricted Boltzmann machines (RBMs) trained on an unstructured dataset made of blurred copies of definite but unavailable "archetypes", and we show that there exists a critical sample size beyond which the RBM can learn the archetypes, namely the machine can successfully play as a generative model or as a classifier, according to the operational routine. In general, assessing a critical sample size (possibly in relation to the quality of the dataset) is still an open problem in machine learning. Here, restricting to random theories, where shallow networks suffice and the grandmother-cell scenario is correct, we leverage the formal equivalence between RBMs and Hopfield networks to obtain a phase diagram for the neural architecture, in the space of the control parameters (i.e., the number of archetypes, the number of neurons, and the size and quality of the training set), that highlights the regions where learning can be accomplished. Our investigation is led by analytical methods based on the statistical mechanics of disordered systems, and the results are further corroborated by extensive Monte Carlo simulations.
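A brute-force numerical companion to this picture (purely illustrative: the grid values, the retrieval criterion, and the network size are arbitrary choices, and the sketch replaces the RBM by its equivalent Hopfield description) sweeps the dataset size M and quality r and records how often the hidden archetype is retrieved, giving a rough empirical counterpart of the phase diagram.

# Illustrative Monte Carlo sweep, not the paper's analytical computation.
import numpy as np

rng = np.random.default_rng(2)
N, K, trials = 100, 2, 10

def retrieval_rate(M, r):
    hits = 0
    for _ in range(trials):
        xi = rng.choice([-1, 1], size=(K, N))
        flips = rng.random((K, M, N)) < (1 - r) / 2
        data = np.where(flips, -xi[:, None, :], xi[:, None, :])
        J = np.einsum('kmi,kmj->ij', data, data) / (N * M)   # Hebbian storage of the examples themselves
        np.fill_diagonal(J, 0.0)
        s = xi[0].copy()            # start at the archetype and test its stability under the dynamics
        for _ in range(30):
            s = np.where(J @ s >= 0, 1, -1)
        hits += (s @ xi[0] / N) > 0.9
    return hits / trials

for M in (5, 20, 80, 320):
    print(M, [round(retrieval_rate(M, r), 2) for r in (0.2, 0.4, 0.6)])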
Objective: Accurate visual classification of bladder tissue during Trans-Urethral Resection of Bladder Tumor (TURBT) procedures is essential to improve early cancer diagnosis and treatment. During TURBT interventions, White Light Imaging (WLI) and Narrow Band Imaging (NBI) techniques are used for lesion detection. Each imaging technique provides diverse visual information that allows clinicians to identify and classify cancerous lesions. Computer vision methods that use both imaging techniques could improve endoscopic diagnosis. We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images correspond to an unpaired dataset, i.e. there is no exact equivalent for every image in both NBI and WLI domains. Method: We propose a semi-supervised Generative Adversarial Network (GAN)-based method composed of three main components: a teacher network trained on the labeled WLI data; a cycle-consistency GAN to perform unpaired image-to-image translation; and a multi-input student network. To ensure the quality of the synthetic images generated by the proposed GAN, we perform a detailed quantitative and qualitative analysis with the help of specialists. Conclusion: The overall average classification accuracy, precision, and recall obtained with the proposed method for tissue classification are 0.90, 0.88, and 0.89 respectively, while the same metrics obtained in the unlabeled domain (NBI) are 0.92, 0.64, and 0.94 respectively. The quality of the generated images is reliable enough to deceive specialists. Significance: This study shows the potential of using semi-supervised GAN-based classification to improve bladder tissue classification when annotations are limited in multi-domain data.
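A schematic sketch of the semi-supervised step described above is given below, with toy CNNs and random tensors standing in for the real teacher, cycle-consistency generator, and endoscopic images; the module names (teacher, gen_nbi2wli, student), image size, and class count are assumptions for illustration, not the authors' implementation.

# Toy pseudo-labeling step: a frozen teacher supervises the unlabeled NBI domain
# through NBI-to-WLI translation; the student sees both inputs.
import torch
import torch.nn as nn

def small_cnn(in_ch, out_dim):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, out_dim),
    )

n_classes = 4
teacher = small_cnn(3, n_classes).eval()          # trained on labeled WLI (frozen here)
gen_nbi2wli = small_cnn(3, 3 * 64 * 64)           # stand-in for the cycle-consistency generator
student = small_cnn(6, n_classes)                 # multi-input: NBI image + translated WLI image

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
nbi_batch = torch.rand(8, 3, 64, 64)              # unlabeled NBI images (dummy data)

with torch.no_grad():
    fake_wli = torch.sigmoid(gen_nbi2wli(nbi_batch)).view(-1, 3, 64, 64)
    pseudo_labels = teacher(fake_wli).argmax(dim=1)     # teacher labels the translated images

logits = student(torch.cat([nbi_batch, fake_wli], dim=1))
loss = nn.functional.cross_entropy(logits, pseudo_labels)
loss.backward()
opt.step()
print("student loss on pseudo-labeled NBI batch:", float(loss))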
The receptive field (RF), which determines the region of time series to be ``seen'' and used, is critical to improve the performance for time series classification (TSC). However, the variation of signal scales across and within time series data makes it challenging to decide on proper RF sizes for TSC. In this paper, we propose a dynamic sparse network (DSN) with sparse connections for TSC, which can learn to cover various RF sizes without cumbersome hyper-parameter tuning. The kernels in each sparse layer are sparse and can be explored within constrained regions by dynamic sparse training, which makes it possible to reduce the resource cost. The experimental results show that the proposed DSN model can achieve state-of-the-art performance on both univariate and multivariate TSC datasets with less than 50\% of the computational cost of recent baseline methods, opening the path towards more accurate resource-aware methods for time series analyses. Our code is publicly available at: https://github.com/QiaoXiao7282/DSN.
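For intuition, a minimal prune-and-regrow loop in the spirit of dynamic sparse training is sketched below; this is not the DSN code from the repository, and the sparsity level, drop fraction, update frequency, and the dummy regression objective are all arbitrary assumptions.

# Dynamic sparse training sketch for a 1D conv layer: weights below a magnitude
# threshold are dropped and an equal number of inactive connections is regrown.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv1d(1, 8, kernel_size=9, padding=4)
sparsity = 0.8
mask = (torch.rand_like(conv.weight) > sparsity).float()   # random initial sparse topology

def prune_and_regrow(weight, mask, drop_frac=0.3):
    active = mask.bool()
    k = int(drop_frac * active.sum())
    if k == 0:
        return mask
    w = weight.abs().masked_fill(~active, float('inf'))
    drop_idx = torch.topk(w.view(-1), k, largest=False).indices   # prune smallest active weights
    mask.view(-1)[drop_idx] = 0.0
    inactive_idx = (mask.view(-1) == 0).nonzero().squeeze(1)
    grow_idx = inactive_idx[torch.randperm(len(inactive_idx))[:k]] # regrow k random connections
    mask.view(-1)[grow_idx] = 1.0
    return mask

x = torch.randn(4, 1, 128)                        # toy univariate time series batch
target = torch.randn(4, 8, 128)
opt = torch.optim.SGD(conv.parameters(), lr=1e-2)
for step in range(200):
    out = nn.functional.conv1d(x, conv.weight * mask, conv.bias, padding=4)
    loss = (out - target).pow(2).mean()           # dummy objective just to drive the weights
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        conv.weight *= mask                       # keep pruned weights at zero
        if step % 50 == 49:
            mask = prune_and_regrow(conv.weight, mask)
print("active fraction of connections:", round(mask.mean().item(), 3))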
While the problem of hallucinations in neural machine translation has long been recognized, progress on alleviating it has so far been limited. Indeed, it recently turned out that, without artificially encouraging models to hallucinate, previously existing methods fall short and even the standard sequence log-probability is more informative. This means that characteristics internal to the model can give much more information than we expect, and before using external models and measures, we first need to ask: how far can we go if we use nothing but the translation model itself? We propose to use a method that evaluates the percentage of the source contribution to a generated translation. Intuitively, hallucinations are translations "detached" from the source, hence they can be identified by a low source contribution. This method improves detection accuracy for the most severe hallucinations by a factor of 2 and is able to alleviate hallucinations at test time on par with the previous best approach that relies on external models. Next, if we move away from internal model characteristics and allow external tools, we show that using sentence similarity from cross-lingual embeddings further improves these results.
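The detection rule itself is simple enough to sketch. Below, the per-token attribution matrix is assumed to come from some external source-attribution method (the paper's actual measure is computed inside the NMT model), and the 0.4 threshold is an illustrative value, not the one used in the work.

# Hedged sketch of the detection rule only: flag a translation as a likely
# hallucination when the average share of attribution credited to the source is low.
import numpy as np

def source_contribution(attrib, n_src):
    """attrib: (n_tgt, n_src + n_prefix) non-negative attribution matrix."""
    attrib = attrib / attrib.sum(axis=1, keepdims=True)
    return attrib[:, :n_src].sum(axis=1).mean()   # mean fraction credited to the source

def is_hallucination(attrib, n_src, threshold=0.4):
    return source_contribution(attrib, n_src) < threshold

# toy example: a translation that mostly "looks at" its own prefix
rng = np.random.default_rng(0)
attrib = np.concatenate([0.1 * rng.random((12, 7)), rng.random((12, 11))], axis=1)
print(source_contribution(attrib, n_src=7), is_hallucination(attrib, n_src=7))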
We pose video object segmentation as spectral graph clustering in space and time, with one graph node for each pixel and edges forming local space-time neighborhoods. We claim that the strongest cluster in this video graph represents the salient object. We start by introducing a novel and efficient method based on 3D filtering for approximating the spectral solution, as the principal eigenvector of the graph's adjacency matrix, without explicitly building the matrix. This key property allows us to have a fast parallel implementation on GPU, orders of magnitude faster than classical approaches for computing the eigenvector. Our motivation for a spectral space-time clustering approach, unique in video semantic segmentation literature, is that such clustering is dedicated to preserving object consistency over time, which we evaluate using our novel segmentation consistency measure. Further on, we show how to efficiently learn the solution over multiple input feature channels. Finally, we extend the formulation of our approach beyond the segmentation task, into the realm of object tracking. In extensive experiments we show significant improvements over top methods, as well as over powerful ensembles that combine them, achieving state-of-the-art on multiple benchmarks, both for tracking and segmentation.
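As a simplified illustration of the filtering-based power iteration (a sketch under strong assumptions, not the paper's exact formulation): if the adjacency factorizes as a_pq = s_p s_q g(p - q), with s a per-pixel feature score and g a local space-time Gaussian, then each matrix-vector product A x reduces to a 3D Gaussian filtering, so the principal eigenvector is obtained without ever building the matrix. The synthetic feature map below stands in for real motion/appearance cues.

# Power iteration on a space-time graph via 3D filtering, without the adjacency matrix.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
T, H, W = 10, 60, 80
s = 0.1 * rng.random((T, H, W))
s[:, 20:40, 30:55] += 1.0                 # a "salient object" tube through time

x = np.ones((T, H, W))
for _ in range(30):                       # power iteration: x <- A x / ||A x||
    x = s * gaussian_filter(s * x, sigma=(1, 3, 3))
    x /= np.linalg.norm(x)

segmentation = x > x.mean()               # strongest cluster ~ salient object (crude threshold)
print("object pixels:", int(segmentation.sum()))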
Metric Elicitation (ME) is a framework for eliciting classification metrics that better align with implicit user preferences based on the task and context. The existing ME strategy is based on the assumption that users can most easily provide preference feedback over classifier statistics such as confusion matrices. This work examines ME in practice by providing the first implementation of the ME strategy. Specifically, we create a web-based ME interface and conduct a user study that elicits users' preferred metrics in a binary classification setting. We discuss the study findings and present guidelines for future research in this direction.
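For readers unfamiliar with ME, the core query loop can be sketched in a few lines. This is a toy simulation, not the web interface or the user study: the one-parameter linear metric family, the simulated oracle, and the sphere parameterization of achievable (TPR, TNR) pairs are simplifying assumptions.

# Elicit a hidden linear metric from pairwise preferences over classifier statistics.
import numpy as np

hidden_w = 0.73                                   # user's hidden weight on TPR vs TNR

def metric(conf, w):                              # conf = (TPR, TNR)
    return w * conf[0] + (1 - w) * conf[1]

def sphere_point(w):                              # a candidate classifier's (TPR, TNR)
    theta = w * np.pi / 2
    return (np.sin(theta), np.cos(theta))

def user_prefers_a(a, b):                         # oracle answering "which classifier is better?"
    return metric(a, hidden_w) >= metric(b, hidden_w)

lo, hi = 0.0, 1.0
for _ in range(20):                               # ternary search driven by pairwise queries
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if user_prefers_a(sphere_point(m1), sphere_point(m2)):
        hi = m2
    else:
        lo = m1

theta = (lo + hi) / 2 * np.pi / 2                 # location of the user's preferred operating point
recovered = np.tan(theta) / (1 + np.tan(theta))   # map it back to the metric weight
print("elicited weight ~", round(recovered, 3), "true:", hidden_w)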
Learning-based image compression has improved to a level where it can outperform traditional image codecs such as HEVC and VVC in terms of coding performance. In addition to good compression performance, device interoperability is essential for a compression codec to be deployed, i.e., encoding and decoding on different CPUs or GPUs should be error-free and incur negligible performance loss. In this paper, we present a method to solve the device interoperability problem of a state-of-the-art image compression network. We apply quantization to the entropy networks, which output the entropy parameters. We propose a simple method that ensures cross-platform encoding and decoding and can be implemented quickly, with a minor performance deviation of 0.3% BD-rate from the floating-point model results.
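One way to picture the interoperability issue and its fix is the fixed-point sketch below (the 8-bit scale, the tiny two-layer network, and the rounding scheme are illustrative assumptions, not the paper's actual quantization of the entropy networks): since only integer operations are used, the computed entropy parameters are bit-identical on any CPU or GPU, so encoder and decoder always agree.

# Fixed-point evaluation of a small "entropy parameter" network in pure integer arithmetic.
import numpy as np

SCALE = 1 << 8                                     # fixed-point scale, 8 fractional bits (assumption)

def quantize(x):
    return np.round(x * SCALE).astype(np.int64)

def int_linear(x_q, w_q, b_q):
    # integer matmul at scale SCALE^2, then rescale back to the working range SCALE
    return (x_q @ w_q.T + b_q * SCALE) // SCALE

def int_relu(x_q):
    return np.maximum(x_q, 0)

rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((16, 8)), rng.standard_normal(16)
w2, b2 = rng.standard_normal((2, 16)), rng.standard_normal(2)
latent = rng.standard_normal(8)

x_q = quantize(latent)
h_q = int_relu(int_linear(x_q, quantize(w1), quantize(b1)))
params_q = int_linear(h_q, quantize(w2), quantize(b2))   # quantized entropy parameters (e.g. mean, scale)
print(params_q)       # identical on every platform; divide by SCALE to recover approximate float values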
Producing high-quality forecasts of key climate variables such as temperature and precipitation on subseasonal time scales has long been a gap in operational forecasting. Recent studies have shown promising results using machine learning (ML) models to advance subseasonal forecasting (SSF), but several open questions remain. First, several past approaches use the average of an ensemble of physics-based forecasts as an input feature of these models. However, ensemble forecasts contain information that can aid prediction beyond only the ensemble mean. Second, past methods have focused on average performance, whereas forecasts of extreme events are far more important for planning and mitigation purposes. Third, climate forecasts correspond to a spatially-varying collection of forecasts, and different methods account for spatial variability in the response differently. Trade-offs between different approaches may be mitigated with model stacking. This paper describes the application of a variety of ML methods used to predict monthly average precipitation and two meter temperature using physics-based predictions (ensemble forecasts) and observational data such as relative humidity, pressure at sea level, or geopotential height, two weeks in advance for the whole continental United States. Regression, quantile regression, and tercile classification tasks using linear models, random forests, convolutional neural networks, and stacked models are considered. The proposed models outperform common baselines such as historical averages (or quantiles) and ensemble averages (or quantiles). This paper further includes an investigation of feature importance, trade-offs between using the full ensemble or only the ensemble average, and different modes of accounting for spatial variability.
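As an example of the stacking idea on synthetic data (the feature names, the data-generating process, and the estimator configuration are assumptions, not the paper's pipeline), the snippet below combines a ridge regression and a random forest over features that include both the ensemble mean and its spread, and compares the result with using the ensemble mean directly.

# Stacked regression over ensemble-derived and observational features (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
ens_mean = rng.standard_normal(n)                  # physics-based ensemble-mean forecast
ens_spread = 0.5 + rng.random(n)                   # ensemble spread: information beyond the mean
humidity = rng.standard_normal(n)
geopotential = rng.standard_normal(n)
X = np.column_stack([ens_mean, ens_spread, humidity, geopotential])
y = ens_mean + 0.3 * humidity + 0.2 * ens_spread * rng.standard_normal(n)   # synthetic "observed" anomaly

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stack = StackingRegressor(
    estimators=[("ridge", Ridge()), ("rf", RandomForestRegressor(n_estimators=100, random_state=0))],
    final_estimator=Ridge(),
)
stack.fit(X_tr, y_tr)
baseline_mse = np.mean((y_te - X_te[:, 0]) ** 2)   # baseline: take the ensemble mean as the forecast
print("stacked R^2:", round(stack.score(X_te, y_te), 3), "ensemble-mean MSE:", round(baseline_mse, 3))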