Deep belief networks (DBNs) are stochastic neural networks that can extract rich internal representations of the environment from sensory data. DBNs had a catalytic role in triggering the deep learning revolution, providing the first demonstration of the feasibility of unsupervised learning in networks with many layers of hidden neurons. Thanks to their biological and cognitive plausibility, these hierarchical architectures have also been successfully exploited to build computational models of human perception and cognition in a variety of domains. However, learning in DBNs is usually carried out in a greedy, layer-wise fashion, which does not allow simulating the holistic development of cortical circuits. Here we present iDBN, an iterative learning algorithm for DBNs that allows the connection weights of all layers of the hierarchy to be updated jointly. We test the algorithm on two different sets of visual stimuli, and we show that network development can also be tracked in terms of graph-theoretical properties. DBNs trained with our iterative approach achieve a final performance comparable to that of their greedy counterparts, while at the same time allowing an accurate analysis of the gradual development of internal representations in the generative model. Our work paves the way for using iDBNs to model neurocognitive development.
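To make the contrast with greedy layer-wise training concrete, the sketch below trains a small two-layer stack of RBMs with one CD-1 step per layer per epoch, so all weights evolve together rather than one layer being fully trained before the next. It is a minimal illustration of the joint-update idea, not the authors' exact iDBN algorithm; layer sizes, the learning rate, and the omission of bias terms are simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v, W, lr=0.01):
    """One contrastive-divergence (CD-1) step for a single RBM layer
    (biases omitted). Returns the weight update and a hidden sample
    that serves as input to the layer above."""
    h_prob = sigmoid(v @ W)
    h = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_recon = sigmoid(h @ W.T)
    h_recon = sigmoid(v_recon @ W)
    dW = lr * (v.T @ h_prob - v_recon.T @ h_recon) / v.shape[0]
    return dW, h

# Two stacked RBMs, 784-256-64 (sizes illustrative).
W1 = rng.normal(0, 0.01, (784, 256))
W2 = rng.normal(0, 0.01, (256, 64))

def iterative_epoch(batch):
    """Joint (non-greedy) update: every layer gets a CD step each epoch,
    instead of fully training layer 1 before layer 2 ever sees data."""
    global W1, W2
    dW1, h1 = cd1_step(batch, W1)
    dW2, _ = cd1_step(h1, W2)
    W1 += dW1
    W2 += dW2

batch = rng.random((32, 784))   # stand-in for real sensory data
for epoch in range(5):
    iterative_epoch(batch)
```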
Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the disruption of previously acquired knowledge, a drawback of deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data do not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental, as it would allow us to build truly intelligent systems showing both stability and plasticity. Secondly, it would allow us to overcome the onerous limitation of retraining these architectures from scratch with the newly updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. Secondly, we present one of the early works on incremental learning with ViT architectures, comparing functional, weight, and attention regularization approaches, and we propose a novel and effective asymmetric loss. Finally, we close with a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field. We conclude with some future directions and closing remarks.
Predictive coding offers a potentially unifying account of cortical function: it posits that the core function of the brain is to minimize prediction errors with respect to a generative model of the world. The theory is closely related to the Bayesian brain framework and, over the last two decades, has gained substantial influence in the fields of theoretical and cognitive neuroscience. A large body of research has built on it, both empirically testing improved and extended theoretical and mathematical models of predictive coding, and evaluating their potential biological plausibility for implementation in the brain along with the concrete neurophysiological and psychological predictions made by the theory. Despite this enduring popularity, however, no comprehensive review of predictive coding theory, and especially of recent developments in the field, exists. Here, we provide a comprehensive review of the core mathematical structure and logic of predictive coding, thus complementing recent tutorials in the literature. We also review a wide range of classic and recent work within the framework, from the neurobiologically realistic microcircuits that could implement predictive coding, to the tight relationship between predictive coding and the widely used backpropagation-of-error algorithm, as well as surveying the close connections between predictive coding and modern machine learning techniques.
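The core mathematics the review covers can be illustrated in a few lines. Below is a minimal sketch of a single-layer linear predictive coding model: inference relaxes the latent state to reduce the prediction error, and learning then applies a local, Hebbian-like weight update using the settled error. The linear generative model, sizes, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear predictive coding: a latent state mu predicts the observation x
# through generative weights W; inference minimizes 0.5*||x - W mu||^2.
W = rng.normal(0, 0.1, (10, 5))   # generative weights: latent (5) -> data (10)
x = rng.normal(size=10)           # an observation
mu = np.zeros(5)                  # latent state, inferred iteratively

lr_mu, lr_W = 0.1, 0.01
for _ in range(50):               # inference: relax latents to reduce error
    eps = x - W @ mu              # prediction error at the data layer
    mu += lr_mu * (W.T @ eps)     # gradient descent on the squared error

# learning: a local, Hebbian-like update from the settled error and latent
eps = x - W @ mu
W += lr_W * np.outer(eps, mu)
```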
Neural generative models can be used to learn complex probability distributions from data, to sample from them, and to produce probability density estimates. We propose a computational framework for developing neural generative models inspired by the theory of predictive processing in the brain. According to predictive processing theory, neurons in the brain form a hierarchy in which neurons at one level form expectations about sensory inputs coming from another level. These neurons update their local models based on the differences between their expectations and the observed signals. In a similar way, the artificial neurons in our generative model predict what neighboring neurons will do and adjust their parameters based on how well those predictions matched reality. In this work, we show that the neural generative models learned within our framework perform well in practice across several benchmark datasets and metrics, and either remain competitive with, or significantly outperform, other generative models with similar functionality (such as the variational auto-encoder).
One of the major obstacles faced by lifelong learning systems based on artificial neural networks is the inability to retain old knowledge as new information is encountered. This phenomenon is known as catastrophic forgetting. In this paper, we propose a new kind of connectionist architecture, the Sequential Neural Coding Network, that is robust to forgetting when learning from streams of data points and, unlike networks of today, does not learn via the popular backpropagation of errors. Grounded in the neurocognitive theory of predictive processing, our model adapts its synapses in a biologically plausible fashion, while another neural system learns to direct and control this cortex-like structure, mimicking some of the task-executive control functionality of the basal ganglia. In our experiments, we demonstrate that our self-organizing system experiences significantly less forgetting than standard neural models, outperforming previously proposed methods, including rehearsal/data-buffer-based approaches, on both standard benchmarks (SplitMNIST, etc.) and custom benchmarks, even when trained in a stream-like fashion. Our work offers evidence that emulating mechanisms found in real neuronal systems, such as local learning and lateral competition, can yield new directions and possibilities for tackling the grand challenge of lifelong machine learning.
We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
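For readers who want the mechanism at a glance, here is a minimal tied-weight denoising autoencoder in numpy: the input is corrupted with masking noise, and the network is trained to reconstruct the clean input under a cross-entropy loss. The layer sizes, corruption level, and single-layer setup are illustrative; the paper stacks such layers to build deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tied-weight denoising autoencoder trained with plain gradient descent.
n_vis, n_hid, lr = 784, 128, 0.1
W = rng.normal(0, 0.01, (n_vis, n_hid))
b_h = np.zeros(n_hid)
b_v = np.zeros(n_vis)

def train_step(x, corrupt_p=0.3):
    global W, b_h, b_v
    mask = rng.random(x.shape) > corrupt_p     # masking corruption
    x_tilde = x * mask
    h = sigmoid(x_tilde @ W + b_h)             # encode the corrupted input
    x_hat = sigmoid(h @ W.T + b_v)             # decode (tied weights)
    # gradients of the cross-entropy reconstruction loss wrt pre-activations
    d_v = (x_hat - x) / x.shape[0]
    d_h = (d_v @ W) * h * (1 - h)
    W -= lr * (x_tilde.T @ d_h + d_v.T @ h)    # both uses of the tied W
    b_h -= lr * d_h.sum(0)
    b_v -= lr * d_v.sum(0)

x = (rng.random((32, n_vis)) > 0.5).astype(float)  # stand-in binary data
for _ in range(10):
    train_step(x)
```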
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
Humans can learn several tasks in succession with minimal mutual interference, but perform more poorly when trained on multiple tasks at once. Standard deep neural networks behave in exactly the opposite way. Here, we propose novel computational constraints for artificial neural networks, inspired by earlier work on the primate prefrontal cortex, that capture the cost of interleaved training and allow the network to learn two tasks in sequence without forgetting. We augment standard stochastic gradient descent with two algorithmic motifs: so-called "sluggish" task units, and a Hebbian training step that strengthens the connections between task units and the hidden units encoding task-relevant information. We find that the "sluggish" units introduce a switch cost during training, which under interleaved training biases representations towards a joint representation that ignores the contextual cue, whereas the Hebbian step promotes the formation of a gating scheme from task units to the hidden layer that produces orthogonal representations and fully prevents interference. Validating the model on previously published human behavioral data shows that it matches the performance of participants trained under blocked or interleaved curricula, and that these performance differences are driven by misestimation of the true category boundary.
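The two motifs described above are simple to write down. The sketch below shows one possible reading of them: a "sluggish" task unit as an exponentially smoothed task cue that carries over between trials, and a Hebbian step that strengthens task-to-hidden connections. All shapes, constants, and the exact update form are illustrative assumptions, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tasks, n_hidden, alpha, eta = 2, 100, 0.2, 0.01
W_task = rng.normal(0, 0.01, (n_tasks, n_hidden))
z = np.zeros(n_tasks)                       # "sluggish" task-unit activity

def trial(task_id, h_activity):
    """One trial: smooth the task cue, then Hebbian-strengthen the
    connections from active task units to active hidden units."""
    global z, W_task
    cue = np.eye(n_tasks)[task_id]
    z = (1 - alpha) * z + alpha * cue       # sluggish temporal integration
    W_task += eta * np.outer(z, h_activity) # Hebbian step

# Interleaved training switches tasks often, so the sluggish trace mixes
# contexts; blocked training lets z settle onto a single task.
for t in range(10):
    trial(t % 2, rng.random(n_hidden))
```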
This is an introductory machine learning course specifically developed with STEM students in mind. Our goal is to provide the interested reader with the basics needed to employ machine learning in their own projects, and to familiarize them with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural-network structures such as dense feedforward and convolutional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability of latent-space representations are discussed using the examples of dreaming and adversarial attacks. The final section is dedicated to reinforcement learning, where we introduce the basic notions of value functions and policy learning.
Between 2015 and 2019, the members of the Horizon 2020-funded Innovative Training Network "AMVA4NewPhysics" studied the customization and application of advanced multivariate analysis methods and statistical learning tools to high-energy physics problems, and developed entirely new ones. Many of these methods were successfully used to improve the sensitivity of data analyses performed by the ATLAS and CMS experiments at the CERN Large Hadron Collider; several others, still in the testing phase, promise to further improve the precision of measurements of fundamental physics parameters and the reach of searches for new phenomena. In this paper, the most relevant new tools among those studied and developed are presented together with an evaluation of their performance.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain and, when implemented on suitable neuromorphic hardware accelerators, could constitute an energy-efficient alternative to conventional deep neural networks. However, instantiating SNNs that solve complex computational tasks in silico remains a significant challenge. Surrogate gradient (SG) techniques have emerged as a standard solution for training SNNs end-to-end. Still, their success depends on synaptic weight initialization, just as in conventional artificial neural networks (ANNs). Yet, unlike in ANNs, it remains elusive what constitutes a good initial state for an SNN. Here, we develop a general initialization strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain. Specifically, we derive practical solutions for data-dependent weight initialization that ensure fluctuation-driven firing in the widely used leaky integrate-and-fire (LIF) neurons. We empirically show that SNNs following our strategy exhibit superior learning performance when trained with SGs. These findings generalize across several datasets and SNN architectures, including fully connected, deep convolutional, recurrent, and more biologically plausible SNNs obeying Dale's law. Thus, fluctuation-driven initialization provides a practical, versatile, and easy-to-implement strategy for improving SNN training performance on diverse tasks in neuromorphic engineering and computational neuroscience.
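As an illustration of what a fluctuation-driven, data-dependent initialization might look like, the sketch below scales initial weights so that the membrane potential of a current-based LIF neuron with Poisson inputs fluctuates with a target standard deviation. The variance estimate here is a simplified shot-noise approximation under assumed input rates, not the paper's exact derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fluctuation_driven_init(n_in, n_out, nu_in, tau_mem=20e-3,
                            target_sigma_u=1.0):
    """Scale initial weights so the membrane potential of a LIF neuron
    fluctuates with a target standard deviation, given the firing rate
    nu_in (Hz) of its inputs. Var[U] ~ sum_i w_i^2 * nu_in * tau_mem / 2
    is a simplified estimate for Poisson-driven, current-based LIF."""
    per_synapse_var = target_sigma_u**2 / (n_in * nu_in * tau_mem / 2)
    return rng.normal(0.0, np.sqrt(per_synapse_var), (n_in, n_out))

W = fluctuation_driven_init(n_in=200, n_out=100, nu_in=10.0)  # 10 Hz inputs
```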
Toward energy-efficient computation on specialized neuromorphic hardware, we present spiking neural coding, an instantiation of a family of artificial neural models grounded in the theory of predictive coding. The model operates through a never-ending process of "guess-and-check", in which neurons predict one another's activity values and then adjust their own activities to make better future predictions. The interactive, iterative nature of our system lends itself well to a continuous-time formulation of sensory stream prediction and, as we show, the model's structure yields a local synaptic update rule that can be used to complement or serve as an alternative to online spike-timing-dependent plasticity. In this paper we present an instantiation of the model that consists of leaky integrate-and-fire units. However, the framework within which our system rests naturally accommodates more complex neurons, such as the Hodgkin-Huxley model. Our experimental results on pattern recognition demonstrate the potential of the model when binary spike trains are the primary paradigm of inter-neuron communication. Notably, spiking neural coding is competitive in terms of classification performance and experiences reduced forgetting when learning from a sequence of tasks, offering a more economical, biologically plausible alternative to popular artificial neural networks.
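For concreteness, the leaky integrate-and-fire units the model instantiates can be simulated in a few lines of discrete-time numpy; the constants below are illustrative, not taken from the paper.

```python
import numpy as np

def lif_step(u, i_in, tau=20.0, dt=1.0, v_thr=1.0):
    """One Euler step of LIF dynamics: leak toward rest, integrate the
    input current, emit a binary spike and reset where threshold is hit."""
    u = u + (dt / tau) * (-u + i_in)
    spikes = (u >= v_thr).astype(float)
    u = u * (1.0 - spikes)          # reset to 0 after a spike
    return u, spikes

rng = np.random.default_rng(0)
u = np.zeros(100)                   # membrane potentials of 100 units
for t in range(50):
    u, s = lif_step(u, rng.random(100))
```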
The emerging field of explainable artificial intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models. Whereas local XAI methods explain individual predictions in the form of attribution maps, thereby identifying "where" important features occur (but not what they represent), global explanation techniques visualize what concepts a model has generally learned to encode. Both approaches thus provide only partial insights and leave the burden of interpreting the model's reasoning to the user. Only a few contemporary techniques aim to combine the principles behind both local and global XAI to obtain more informative explanations. However, those methods are often limited to specific model architectures or impose additional requirements on the training regime or on data and label availability, which effectively precludes their post-hoc application to arbitrary pre-trained models. In this work, we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives of XAI and thus allows answering both the "where" and the "what" questions, without additional constraints. We further introduce the principle of Relevance Maximization for finding representative examples of encoded concepts based on their usefulness to the model, thereby lifting the dependency on the common practice of Activation Maximization and its limitations. We demonstrate the capabilities of our methods in various settings, showcasing that Concept Relevance Propagation and Relevance Maximization lead to more human-interpretable explanations and provide deep insights into the model's representations and reasoning through concept atlases, concept composition analyses, and quantitative investigations of concept subspaces and their role in fine-grained decision making.
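The "conditional" flavor of relevance propagation can be sketched with the classic LRP epsilon rule on a single linear layer: relevance is redistributed to inputs in proportion to their contributions, and restricting the incoming relevance to one hidden channel yields a concept-specific "where" map. This is a schematic illustration of the idea, not the full CRP method; all sizes and names are assumptions.

```python
import numpy as np

def lrp_linear(x, W, R_out, eps=1e-6):
    """Epsilon-rule relevance propagation through one linear layer:
    redistribute output relevance R_out to the inputs in proportion
    to each input's contribution z_ij = x_i * W_ij."""
    z = x @ W                                       # pre-activations
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratio
    return x * (s @ W.T)                            # R_in, same shape as x

rng = np.random.default_rng(0)
x, W = rng.random(16), rng.normal(0, 0.3, (16, 8))
R_out = x @ W                                       # start from the output
concept_mask = np.zeros(8); concept_mask[3] = 1.0   # keep one "concept" channel
R_in = lrp_linear(x, W, R_out * concept_mask)       # "where" map for concept 3
```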
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
A key challenge for AI is to build embodied systems that operate in dynamically changing environments. Such systems must adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state-of-the-art results on static benchmarks, they often struggle in dynamic scenarios. In these settings, error signals from multiple contexts can interfere with one another, ultimately leading to a phenomenon known as catastrophic forgetting. In this article we investigate biologically inspired architectures as solutions to these problems. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our key contributions are as follows. First, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture on two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model's prediction task changes throughout training. Analyses on both benchmarks demonstrate the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. Our neural implementation marks the first time a single architecture has achieved competitive results in both multi-task and continual learning settings. Our research sheds light on how the biological properties of neurons can inform deep learning systems to solve dynamic scenarios that are typically impossible for traditional ANNs to solve.
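One way to picture the proposed mechanism is the sketch below: each unit owns a few dendritic segments that are matched against a context vector, the best-matching segment gates the feedforward drive, and a k-winner-take-all step enforces sparse activity. Segment counts, sizes, and the gating function are illustrative assumptions, not the exact published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def kwta(x, k):
    """k-winner-take-all: keep the k largest activations, zero the rest."""
    thresh = np.sort(x)[-k]
    return np.where(x >= thresh, x, 0.0)

n_in, n_units, n_segments, n_ctx = 32, 64, 4, 16
W = rng.normal(0, 0.1, (n_in, n_units))             # feedforward weights
D = rng.normal(0, 0.1, (n_units, n_segments, n_ctx))  # dendritic segments

def forward(x, context, k=10):
    ff = x @ W                                       # feedforward drive
    seg = D @ context                                # segment matches (units, segs)
    gate = 1.0 / (1.0 + np.exp(-seg.max(axis=1)))    # best segment, squashed
    return kwta(ff * gate, k)                        # sparse, context-gated output

y = forward(rng.random(n_in), rng.random(n_ctx))
```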
Supervised learning in artificial neural networks typically relies on backpropagation, where weights are updated based on the gradient of an error function and sequentially propagated from the output layer back to the input layer. Although this approach has proven effective across a wide range of application domains, it lacks biological plausibility in many respects, including the weight-symmetry problem, the dependence of learning on non-local signals, the freezing of neural activity during error propagation, and the update-locking problem. Alternative training schemes have been introduced, including sign symmetry, feedback alignment, and direct feedback alignment, but they invariably rely on a backward pass, which prevents them from solving all of these problems simultaneously. Here, we propose to replace the backward pass with a second forward pass in which the input signal is modulated based on the error of the network. We show that this novel learning rule comprehensively addresses all of the above problems and can be applied to both fully connected and convolutional models. We test this learning rule on MNIST, CIFAR-10, and CIFAR-100. These results help incorporate biological principles into machine learning.
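The two-forward-pass idea can be sketched as follows: the second pass sees the input shifted by a fixed random projection of the output error, and hidden weights move toward the difference between the two passes' activations, while the output layer uses the error directly. This is a paraphrase of the error-driven input-modulation rule under assumed shapes and signs, not a verified reimplementation of the paper's exact update.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

n_in, n_hid, n_out, lr = 20, 40, 5, 0.01
W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))
F = rng.normal(0, 0.1, (n_out, n_in))      # fixed error-projection matrix

def train_step(x, target):
    global W1, W2
    h1 = relu(x @ W1); y = h1 @ W2          # first (clean) forward pass
    e = y - target                          # output error
    x_mod = x + e @ F                       # error-modulated input
    h1m = relu(x_mod @ W1); ym = h1m @ W2   # second forward pass
    W1 -= lr * np.outer(x_mod, h1 - h1m)    # local, activity-difference update
    W2 -= lr * np.outer(h1m, e)             # last layer uses the error itself
    return e

x, target = rng.random(n_in), np.eye(n_out)[2]
for _ in range(20):
    e = train_step(x, target)
```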
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis, which provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
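A schematic version of the Spectral Relevance Analysis pipeline, assuming scikit-learn is available: flatten per-sample relevance heatmaps and cluster them spectrally, so that distinct clusters flag distinct prediction strategies (including suspicious "Clever Hans" ones). The random heatmaps below stand in for real attributions, and the cluster count is an arbitrary choice.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
heatmaps = rng.random((200, 32, 32))          # 200 relevance maps, 32x32
X = heatmaps.reshape(len(heatmaps), -1)       # flatten for clustering

labels = SpectralClustering(
    n_clusters=4, affinity="nearest_neighbors", random_state=0
).fit_predict(X)

# Inspect a few samples per cluster to characterize each strategy.
for c in range(4):
    print(f"cluster {c}: {np.sum(labels == c)} heatmaps")
```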
Modern deep learning methods constitute incredibly powerful tools to tackle a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often challenging to quantify. Bayesian statistics offers a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset to design, implement, train, use, and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
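As a taste of the toolset such a tutorial covers, here is a minimal stochastic layer in the variational style: each weight has a learned mean and standard deviation, and Monte Carlo sampling over weight draws yields a predictive mean and spread. Training (the ELBO and its gradients) is omitted; all names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, n_samples = 8, 1, 100
mu = rng.normal(0, 0.1, (n_in, n_out))   # variational means
rho = np.full((n_in, n_out), -2.0)       # sigma = softplus(rho) > 0
sigma = np.log1p(np.exp(rho))

def predict(x):
    """Draw one weight sample (reparameterization trick) and predict."""
    w = mu + sigma * rng.normal(size=mu.shape)
    return x @ w

x = rng.random(n_in)
preds = np.array([predict(x) for _ in range(n_samples)])
print(preds.mean(), preds.std())         # predictive mean and uncertainty
```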