Traditional analyses in non-convex optimization typically rely on the smoothness assumption, namely requiring the gradients to be Lipschitz. However, recent evidence shows that this smoothness condition does not capture the properties of some deep learning objective functions, including those involving Recurrent Neural Networks and LSTMs. Instead, these functions satisfy a much more relaxed condition, with potentially unbounded smoothness. Under this relaxed assumption, it has been shown, both theoretically and empirically, that gradient-clipped SGD has an advantage over vanilla SGD. In this paper, we show that clipping is not indispensable for Adam-type algorithms in tackling such scenarios: we theoretically prove that a generalized SignSGD algorithm can obtain convergence rates similar to SGD with clipping, but without any explicit clipping at all. This family of algorithms recovers SignSGD on one end and closely resembles the popular Adam algorithm on the other. Our analysis underlines the critical role that momentum plays in analyzing SignSGD-type and Adam-type algorithms: it not only reduces the effect of noise, thus removing the need for large mini-batches in previous analyses of SignSGD-type algorithms, but it also substantially reduces the effects of unbounded smoothness and gradient norms. We also compare these algorithms with popular optimizers on a set of deep learning tasks, observing that we can match Adam's performance while beating the others.
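To make the interpolation concrete, here is a minimal sketch of one way such a SignSGD/Adam family can be written, normalizing the momentum buffer by an exponential moving average of its own square (the exact pseudocode in the paper may differ; all names below are ours):

```python
import numpy as np

def generalized_signsgd_step(x, grad, state, lr=1e-3, beta1=0.9, beta2=0.99, eps=1e-8):
    """One step of a SignSGD/Adam-interpolating update (illustrative sketch).

    beta2 = 0  -> the denominator is |m|, so the step is lr * sign(m):
                  SignSGD with momentum.
    beta2 -> 1 -> an Adam-like step m / sqrt(EMA of squared momentum).
    """
    m = beta1 * state["m"] + (1 - beta1) * grad   # momentum buffer (noise reduction)
    v = beta2 * state["v"] + (1 - beta2) * m**2   # EMA of the squared momentum
    state["m"], state["v"] = m, v
    return x - lr * m / (np.sqrt(v) + eps)
```

Initialize `state = {"m": np.zeros_like(x), "v": np.zeros_like(x)}`; note that no clipping threshold appears anywhere in the update.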
Quantile (and, more generally, KL) regret bounds, such as those achieved by NormalHedge (Chaudhuri, Freund, and Hsu 2009) and its variants, relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data. More recently, the semi-adversarial paradigm (Bilodeau, Negrea, and Roy 2020) provides an alternative relaxation of adversarial online learning by considering data that may be neither fully adversarial nor stochastic (i.i.d.). We achieve minimax optimal regret in both paradigms using FTRL with separate, novel, root-logarithmic regularizers, both of which can be interpreted as yielding variants of NormalHedge. We extend existing KL regret upper bounds, which hold uniformly over target distributions, to possibly uncountable expert classes with arbitrary priors; provide the first full-information lower bounds for quantile regret on finite expert classes (which are tight); and provide an adaptively minimax optimal algorithm for the semi-adversarial paradigm that adapts to the true, unknown constraint faster, leading to uniformly improved regret bounds over existing methods.
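For readers unfamiliar with the relaxation, the epsilon-quantile regret can be sketched as follows (a standard formulation; the notation is ours, not necessarily the paper's):

```latex
% Cumulative loss of expert i after T rounds: L_{T,i} = \sum_{t=1}^{T} \ell_{t,i}.
% Let L_{T,(k)} denote the k-th smallest of L_{T,1}, \dots, L_{T,N}.
% The \epsilon-quantile regret competes with the \lceil \epsilon N \rceil-th
% best expert instead of the single best one:
R_T(\epsilon) \;=\; \sum_{t=1}^{T} \ell_t(\hat{p}_t) \;-\; L_{T,(\lceil \epsilon N \rceil)}
```

Setting epsilon = 1/N recovers the usual regret against the best expert.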
A classic problem in statistics is the estimation of the expectation of random variables from samples. This gives rise to the tightly connected problems of deriving concentration inequalities and confidence sequences, that is, confidence intervals that hold uniformly over time. Jun and Orabona [COLT'19] have shown how to easily convert the regret guarantee of an online betting algorithm into a time-uniform concentration inequality. In this paper, we show that we can go even further: we show that the regret of the universal portfolio algorithm gives rise to new implicit time-uniform concentrations and state-of-the-art empirically calculated confidence sequences. In particular, our numerically obtained confidence sequences are never vacuous, even with a single sample, and satisfy the law of the iterated logarithm.
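To illustrate the betting-to-confidence-sequence route numerically, here is a minimal sketch for bounded data in [0, 1]; it uses a fixed betting fraction as a stand-in for the universal portfolio algorithm, which would need its own implementation (all names and the grid resolution are our choices):

```python
import numpy as np

def betting_confidence_interval(xs, alpha=0.05, lam=0.5, grid_size=201):
    """Time-uniform confidence interval for the mean of [0,1]-valued data (sketch).

    For each candidate mean m, run the wealth process
        W_t(m) = prod_{s<=t} (1 + b * (x_s - m)),
    with the bet b clipped so the wealth stays positive. By Ville's inequality,
    {m : W_t(m) < 1/alpha} holds uniformly over time with probability 1 - alpha.
    """
    xs = np.asarray(xs, dtype=float)
    keep = []
    for m in np.linspace(0.0, 1.0, grid_size):
        # keep 1 + b*(x - m) > 0 for all x in [0, 1]
        lo = -1.0 / max(1.0 - m, 1e-12) + 1e-6
        hi = 1.0 / max(m, 1e-12) - 1e-6
        b = float(np.clip(lam, lo, hi))
        wealth = np.cumprod(1.0 + b * (xs - m))
        if wealth.max() < 1.0 / alpha:   # m was never rejected at any time
            keep.append(m)
    return (min(keep), max(keep)) if keep else (None, None)
```

This fixed-bet version is much weaker than the universal portfolio construction analyzed in the paper, but it shows the mechanism: tighter regret for the bettor translates into tighter confidence sequences.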
SGD with Momentum (SGDM) is a widely used family of algorithms for large-scale optimization of machine learning problems. Yet, when optimizing generic convex functions, no advantage is known for any SGDM algorithm over plain SGD. Moreover, even the most recent results require changes to the SGDM algorithms, such as averaging of the iterates and projections onto a bounded domain, which are rarely used in practice. In this paper, we focus on the convergence rate of the last iterate of SGDM. For the first time, we prove that for any constant momentum factor, there exists a Lipschitz and convex function for which the last iterate of SGDM suffers from a suboptimal convergence rate of $\Omega(\frac{\ln T}{\sqrt{T}})$ after $T$ iterations. Based on this fact, we study a class of (both adaptive and non-adaptive) Follow-The-Regularized-Leader-based SGDM algorithms with increasing momentum and shrinking updates. For these algorithms, we show that the last iterate has optimal convergence $O(\frac{1}{\sqrt{T}})$ for unconstrained convex stochastic optimization problems, without projections onto a bounded domain nor knowledge of $T$. Moreover, we show a variety of results for FTRL-based SGDM when used with adaptive step sizes. Empirical results are shown as well.
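For reference, a last-iterate SGDM loop with increasing momentum and shrinking updates might look as follows; the specific schedules are illustrative guesses, not necessarily the ones derived from the FTRL construction in the paper:

```python
import numpy as np

def sgdm_last_iterate(grad_fn, x0, T, c=1.0):
    """SGDM with increasing momentum and shrinking steps (illustrative sketch).

    beta_t -> 1 and eta_t -> 0 as t grows; the schedules beta_t = t/(t+1)
    and eta_t = c/sqrt(t) are assumptions made for illustration.
    """
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)
    for t in range(1, T + 1):
        g = grad_fn(x)                  # stochastic gradient oracle
        beta_t = t / (t + 1.0)          # momentum increases toward 1
        eta_t = c / np.sqrt(t)          # update size shrinks
        m = beta_t * m + (1.0 - beta_t) * g
        x = x - eta_t * m
    return x                            # the last iterate, with no averaging
```

Note that there is no projection step and the step sizes do not depend on the horizon, matching the "no bounded domain, no knowledge of $T$" setting.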
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the knowledge disruption of previously acquired concepts, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data does not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental, as it would allow us to build truly intelligent systems showing both stability and plasticity. Secondly, it would allow us to overcome the onerous limitation of retraining these architectures from scratch with the newly updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. Secondly, we propose one of the early works on incremental learning for ViT architectures, comparing functional, weight, and attention regularization approaches, and propose an effective novel asymmetric loss. Finally, we present a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field. We then close with some future directions and remarks.
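Since the first study concerns rehearsal buffers, a minimal sketch of the reservoir-sampling buffer commonly used by rehearsal methods may be useful (a standard design; whether the thesis uses exactly this variant is not stated in the abstract):

```python
import random

class RehearsalBuffer:
    """Fixed-capacity memory filled by reservoir sampling (a common design)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []    # stored (example, label) pairs
        self.seen = 0     # total number of stream examples observed

    def add(self, example, label):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((example, label))
        else:
            # every example seen so far is kept with probability capacity/seen
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (example, label)

    def sample(self, k):
        # mini-batch of stored examples to replay alongside the current task
        return random.sample(self.data, min(k, len(self.data)))
```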
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downstream neurons of the network. Each of the downstream neurons uses its copy of this signal as one of many dendritic inputs, integrates them all, and fires an output if above some threshold. In the artificial neural network, this translates to the fact that the nonlinear filtering of the signal is performed in the upstream neuron, meaning that in practice the same activation is shared among all the downstream neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model for the biological neuron, where dendrites play an active role: the activation at the output of the upstream neuron becomes optional, and instead the signals going through each dendrite undergo independent nonlinear filterings before the linear combination. We implement this new model in a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit for fully connected and convolutional layers and estimate the change in their FLOPs and weights. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements over standard ResNets of up to 1.73%. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
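A sketch of what such a per-dendrite unit can look like as a Keras layer is below; the exact layer in the paper may differ (the per-connection bias is our assumption):

```python
import tensorflow as tf

class DendriticDense(tf.keras.layers.Layer):
    """Dense layer where each connection applies its own ReLU before summation.

    Instead of y_i = relu(sum_j w_ij * x_j + b_i), compute
    y_i = sum_j relu(w_ij * x_j + b_ij): the nonlinearity moves from the
    upstream neuron's output to the individual dendrites (illustrative sketch).
    """

    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        d = int(input_shape[-1])
        self.w = self.add_weight(shape=(d, self.units), initializer="glorot_uniform")
        self.b = self.add_weight(shape=(d, self.units), initializer="zeros")

    def call(self, x):
        pre = x[..., tf.newaxis] * self.w + self.b      # (batch, d, units)
        return tf.reduce_sum(tf.nn.relu(pre), axis=-2)  # sum over dendrites
```

Relative to a standard Dense layer, the parameter count roughly doubles because the bias becomes per-connection rather than per-unit.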
Detecting anomalous data within time series is a very relevant task in pattern recognition and machine learning, with many possible applications ranging from disease prevention in medicine, e.g., detecting early alterations of the health status before it can clearly be defined as "illness", up to monitoring industrial plants. Regarding this latter application, detecting anomalies in an industrial plant's status firstly prevents serious damages that would require a long interruption of the production process. Secondly, it permits optimal scheduling of maintenance interventions by limiting them to urgent situations; by contrast, interventions typically follow a fixed prudential schedule according to which components are substituted well before the end of their expected lifetime. This paper describes a case study regarding the monitoring of the status of Laser-guided Vehicles (LGVs) batteries, on which we worked as our contribution to project SUPER (Supercomputing Unified Platform, Emilia Romagna), aimed at establishing and demonstrating a regional High-Performance Computing platform that is going to represent the main Italian supercomputing environment for both computing power and data volume.
Recent object detection models for infrared (IR) imagery are based upon deep neural networks (DNNs) and require large amounts of labeled training imagery. However, publicly-available datasets that can be used for such training are limited in their size and diversity. To address this problem, we explore cross-modal style transfer (CMST) to leverage large and diverse color imagery datasets so that they can be used to train DNN-based object detectors for IR imagery. We evaluate six contemporary stylization methods on four publicly-available IR datasets - the first comparison of its kind - and find that CMST is highly effective for DNN-based detectors. Surprisingly, we find that existing data-driven methods are outperformed by a simple grayscale stylization (an average of the color channels). Our analysis reveals that existing data-driven methods are either too simplistic or introduce significant artifacts into the imagery. To overcome these limitations, we propose meta-learning style transfer (MLST), which learns a stylization by composing and tuning well-behaved analytic functions. We find that MLST leads to more complex stylizations without introducing significant image artifacts and achieves the best overall detector performance on our benchmark datasets.
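The winning grayscale baseline is simple enough to state in a few lines; one way to implement it (replicating the channel average back to three channels so that RGB-pretrained detectors accept the input, which is our assumption about the setup):

```python
import numpy as np

def grayscale_stylize(rgb):
    """Average the color channels as a stand-in IR-like stylization (sketch).

    rgb: float array of shape (H, W, 3). Returns shape (H, W, 3), so a
    detector expecting 3-channel input needs no architectural change.
    """
    gray = rgb.mean(axis=-1, keepdims=True)  # simple average of R, G, B
    return np.repeat(gray, 3, axis=-1)
```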
Objective: Accurate visual classification of bladder tissue during Trans-Urethral Resection of Bladder Tumor (TURBT) procedures is essential to improve early cancer diagnosis and treatment. During TURBT interventions, White Light Imaging (WLI) and Narrow Band Imaging (NBI) techniques are used for lesion detection. Each imaging technique provides diverse visual information that allows clinicians to identify and classify cancerous lesions. Computer vision methods that use both imaging techniques could improve endoscopic diagnosis. We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images correspond to an unpaired dataset, i.e. there is no exact equivalent for every image in both NBI and WLI domains. Method: We propose a semi-supervised Generative Adversarial Network (GAN)-based method composed of three main components: a teacher network trained on the labeled WLI data; a cycle-consistency GAN to perform unpaired image-to-image translation; and a multi-input student network. To ensure the quality of the synthetic images generated by the proposed GAN, we perform a detailed quantitative and qualitative analysis with the help of specialists. Conclusion: The overall average classification accuracy, precision, and recall obtained with the proposed method for tissue classification are 0.90, 0.88, and 0.89 respectively, while the same metrics obtained in the unlabeled domain (NBI) are 0.92, 0.64, and 0.94 respectively. The quality of the generated images is reliable enough to deceive specialists. Significance: This study shows the potential of using semi-supervised GAN-based classification to improve bladder tissue classification when annotations are limited in multi-domain data.
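The three-component flow can be sketched at a high level as follows; every name below (train_classifier, CycleGAN, Student) is a hypothetical placeholder standing in for the architectures detailed in the paper, and the pseudo-labeling direction is our reading of the setup:

```python
def train_pipeline(wli_images, wli_labels, nbi_images):
    # 1) Teacher: supervised training on the labeled WLI domain only.
    teacher = train_classifier(wli_images, wli_labels)

    # 2) Unpaired NBI <-> WLI translation via a cycle-consistency GAN.
    gan = CycleGAN()
    gan.fit(domain_a=wli_images, domain_b=nbi_images)

    # 3) Pseudo-label unlabeled NBI images through their synthetic WLI views.
    fake_wli = [gan.b_to_a(img) for img in nbi_images]
    pseudo_labels = [teacher.predict(img) for img in fake_wli]

    # 4) Multi-input student consumes both the real and the synthetic view.
    student = Student()
    student.fit(inputs=list(zip(nbi_images, fake_wli)), targets=pseudo_labels)
    return student
```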
Neural image classifiers are known to undergo severe performance degradation when exposed to input that exhibits covariate shift with respect to the training distribution. Successful hand-crafted augmentation pipelines aim either at approximating the expected test-domain conditions or at perturbing the features that are specific to the training environment. The development of effective pipelines is typically cumbersome and produces transformations whose impact on the classifier performance is hard to understand and control. In this paper, we show that recent Text-to-Image (T2I) generators' ability to simulate image interventions via natural-language prompts can be leveraged to train more robust models, offering a more interpretable and controllable alternative to traditional augmentation methods. We find that a variety of prompting mechanisms are effective for producing synthetic training data sufficient to achieve state-of-the-art performance in widely-adopted domain-generalization benchmarks and reduce classifiers' dependency on spurious features. Our work suggests that further progress in T2I generation and a tighter integration with other research fields may represent a significant step towards the development of more robust machine learning systems.
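As a concrete example of prompt-based intervention simulation, here is a minimal sketch using an off-the-shelf T2I model via the diffusers library; the model choice, prompt template, and class/domain lists are our assumptions, not necessarily those used in the paper:

```python
import os
import torch
from diffusers import StableDiffusionPipeline

CLASSES = ["dog", "elephant", "guitar"]               # hypothetical class labels
DOMAINS = ["photo", "sketch", "painting", "cartoon"]  # rendering domains to vary

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

os.makedirs("synthetic", exist_ok=True)
for cls in CLASSES:
    for dom in DOMAINS:
        # vary the domain while keeping the class fixed, so the classifier
        # sees class-consistent but domain-diverse training data
        image = pipe(f"a {dom} of a {cls}").images[0]
        image.save(f"synthetic/{cls}_{dom}.png")
```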