We introduce Softmax Gradient Tampering, a technique for modifying the gradients in the backward pass of neural networks in order to improve their accuracy. Our approach transforms the predicted probability values using a power-based probability transformation and then recomputes the gradients in the backward pass. This modification results in a smoother gradient profile, which we demonstrate both empirically and theoretically. We perform a grid search over the transformation parameters on residual networks. We demonstrate that modifying the softmax gradients in ConvNets can lead to higher training accuracy, thereby increasing the fit on the training data and maximally utilizing the learning capacity of the neural network. When combined with regularization techniques such as label smoothing, we obtain better test metrics and lower generalization gaps. Softmax Gradient Tampering improves over the baseline on the ImageNet dataset by 0.52%. Our approach is very generic and can be used across a wide variety of network architectures and datasets.
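As a concrete illustration of the idea sketched above, the following minimal PyTorch snippet tampers the softmax gradient in the backward pass while leaving the forward cross-entropy untouched. The power transform p ↦ p^α / Σ p^α, the exponent name `alpha`, and the class name are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

class TamperedSoftmaxCE(torch.autograd.Function):
    """Softmax cross-entropy whose backward pass uses power-transformed
    probabilities instead of the raw softmax outputs (a sketch of the idea,
    not the paper's exact formulation)."""

    @staticmethod
    def forward(ctx, logits, targets, alpha):
        probs = F.softmax(logits, dim=-1)
        ctx.save_for_backward(probs, targets)
        ctx.alpha = alpha
        return F.nll_loss(torch.log(probs + 1e-12), targets)

    @staticmethod
    def backward(ctx, grad_output):
        probs, targets = ctx.saved_tensors
        # Power-based transformation of the predicted distribution,
        # renormalized so it still sums to one.
        tampered = probs.pow(ctx.alpha)
        tampered = tampered / tampered.sum(dim=-1, keepdim=True)
        one_hot = F.one_hot(targets, num_classes=probs.size(-1)).to(probs.dtype)
        # Usual softmax-CE gradient (p - y), but computed from the tampered p.
        grad_logits = grad_output * (tampered - one_hot) / probs.size(0)
        return grad_logits, None, None

# Usage: loss = TamperedSoftmaxCE.apply(logits, targets, 0.5)
```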
Iris presentation attack detection (IPAD) is essential for securing personal identity, since iris recognition systems are widely used. However, existing IPAD algorithms do not generalize well to unseen and cross-domain scenarios because of the high visual correlation between bona fide and attack samples captured in unconstrained environments. These similarities in the complex textural and morphological patterns of iris images further contribute to performance degradation. To mitigate these shortcomings, this paper proposes DFCANet: a Dense Feature Calibration and Attention-Guided Network that calibrates locally spread iris patterns against globally situated ones. Drawing on the advantages of feature-calibration convolution and residual learning, DFCANet generates domain-specific iris feature representations. Since some channels in the calibrated feature maps contain more salient information, we exploit discriminative feature learning across channels through a channel attention mechanism. To make the task more challenging for the proposed model, we let DFCANet operate on non-uniform and non-normalized ocular iris images. Extensive experiments conducted in challenging cross-domain and intra-domain scenarios highlight consistent performance advantages. Compared with state-of-the-art methods, DFCANet achieves significant performance gains on the benchmark IIITD CLI, IIIT CSD, and NDCLD13 databases. Furthermore, a novel incremental-learning-based method is introduced to overcome disentangled iris-data characteristics and data scarcity. The paper also pursues the challenging scenario of soft lenses under the attack category, evaluated under various cross-domain protocols. The code will be made publicly available.
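The abstract does not spell out the channel attention mechanism, so the sketch below shows a generic squeeze-and-excitation-style channel attention block, one common way to reweight calibrated feature-map channels; the module name and reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic squeeze-and-excitation style channel attention: global average
    pooling followed by a small bottleneck MLP produces per-channel weights."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weight in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight calibrated channels
```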
The novel coronavirus disease 2019 (COVID-19) is a fatal infectious disease that was first identified in Wuhan, China, in December 2019 and has remained in a pandemic state ever since. Under these circumstances, detecting COVID-19 among the infected population has become increasingly important. Nowadays, the number of available test kits is gradually falling short of the number of infected people. Under the recent pandemic conditions, diagnosing lung disease by analyzing chest CT (computed tomography) images has become an important tool for the diagnosis and prognosis of COVID-19 patients. In this study, a transfer-learning strategy based on a convolutional neural network (CNN) has been proposed for detecting COVID-19 infection from CT images. In the proposed model, a multi-layer convolutional neural network (CNN) is designed together with a transfer-learning model (V3). Like a CNN, it uses convolution and pooling to extract features, but this transfer-learning model carries weights pre-trained on the ImageNet dataset. Therefore, it can detect features very effectively, which gives it an advantage in achieving better accuracy.
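Assuming "V3" refers to an Inception-v3-style backbone with ImageNet weights, a minimal PyTorch/torchvision sketch of this transfer-learning setup could look as follows; the two-class head, frozen backbone, and optimizer settings are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained Inception v3 backbone (input size 299x299)
# and replace its classifier with a two-class head (COVID-19 vs. non-COVID).
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.aux_logits = False                       # skip the auxiliary classifier output
for p in model.parameters():
    p.requires_grad = False                    # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, 2)  # new head, trainable by default

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Fine-tune: for images (B x 3 x 299 x 299) and labels from a CT dataloader,
#   loss = criterion(model(images), labels); loss.backward(); optimizer.step()
```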
Unsupervised learning-based anomaly detection in latent space has gained importance since discriminating anomalies from normal data becomes difficult in high-dimensional space. Both density estimation and distance-based methods to detect anomalies in latent space have been explored in the past. These methods prove that retaining valuable properties of input data in latent space helps in the better reconstruction of test data. Moreover, real-world sensor data is skewed and non-Gaussian in nature, making mean-based estimators unreliable for skewed data. Furthermore, anomaly detection methods based on reconstruction error rely on Euclidean distance, which does not consider useful correlation information in the feature space and also fails to accurately reconstruct the data when it deviates from the training distribution. In this work, we address the limitations of reconstruction error-based autoencoders and propose a kernelized autoencoder that leverages a robust form of Mahalanobis distance (MD) to measure correlation across latent dimensions and effectively detect both near and far anomalies. This hybrid loss is aided by the principle of maximizing the mutual information gain between the latent dimensions and the high-dimensional prior data space: the entropy of the latent space is maximized while useful correlation information of the original data is preserved in the low-dimensional latent space. The multi-objective function has two goals -- it measures correlation information in the latent feature space in the form of a robust MD and simultaneously tries to preserve useful correlation information from the original data space in the latent space by maximizing the mutual information between the prior and latent spaces.
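The exact robust-MD formulation is not given in the abstract; the scikit-learn sketch below is one plausible reading, scoring encoded test points by a Mahalanobis distance built from a robust (Minimum Covariance Determinant) estimate of the training latents. The function name and choice of estimator are assumptions.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def latent_mahalanobis_scores(train_latents: np.ndarray,
                              test_latents: np.ndarray) -> np.ndarray:
    """Score test points by a robust Mahalanobis distance in the latent space.

    A robust covariance estimate (Minimum Covariance Determinant) replaces the
    ordinary mean/covariance so that skewed, non-Gaussian training latents do
    not distort the distance. Higher score = more anomalous. Works best when
    the latent dimension is small relative to the number of training points.
    """
    mcd = MinCovDet().fit(train_latents)   # robust location + scatter estimate
    return mcd.mahalanobis(test_latents)   # squared Mahalanobis distances

# Usage sketch: the latents come from the encoder of a trained autoencoder,
# e.g. z_train = encoder(x_train), z_test = encoder(x_test).
```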
The usage of technologically advanced devices has seen a boom in many domains, including education, automation, and healthcare, with most of these services requiring Internet connectivity. To secure a network, device identification plays a key role. In this paper, a device fingerprinting (DFP) model, which is able to distinguish between Internet of Things (IoT) and non-IoT devices, as well as uniquely identify individual devices, has been proposed. Four statistical features are extracted from five consecutive device-originated packets to generate individual device fingerprints. The method has been evaluated using the Random Forest (RF) classifier and different datasets. Experimental results show that the proposed method achieves up to 99.8% accuracy in distinguishing between IoT and non-IoT devices and over 97.6% in classifying individual devices. These results signify that the proposed method is useful in assisting operators in making their networks more secure and robust to security breaches and unauthorized access.
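The abstract does not name the four statistical features, so the sketch below uses simple packet-length statistics purely as placeholders to show the fingerprint-then-classify pipeline with scikit-learn's Random Forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fingerprint(packet_lengths: np.ndarray) -> np.ndarray:
    """Turn five consecutive packet lengths from one device into a 4-feature
    fingerprint. The four statistics here (min, max, mean, std) are placeholders;
    the paper's actual features are not specified in the abstract."""
    p = np.asarray(packet_lengths[:5], dtype=float)
    return np.array([p.min(), p.max(), p.mean(), p.std()])

# Usage sketch with hypothetical data:
# X = np.stack([fingerprint(w) for w in packet_windows])  # one row per device window
# y = device_labels
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# predictions = clf.predict(X_new)
```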
Multiple studies have focused on predicting the prospective popularity of an online document as a whole, without paying attention to the contributions of its individual parts. We introduce the task of proactively forecasting popularities of sentences within online news documents solely utilizing their natural language content. We model sentence-specific popularity forecasting as a sequence regression task. For training our models, we curate InfoPop, the first dataset containing popularity labels for over 1.7 million sentences from over 50,000 online news documents. To the best of our knowledge, this is the first dataset automatically created using streams of incoming search engine queries to generate sentence-level popularity annotations. We propose a novel transfer learning approach involving sentence salience prediction as an auxiliary task. Our proposed technique coupled with a BERT-based neural model exceeds nDCG values of 0.8 for proactive sentence-specific popularity forecasting. Notably, our study presents a non-trivial takeaway: though popularity and salience are different concepts, transfer learning from salience prediction enhances popularity forecasting. We release InfoPop and make our code publicly available: https://github.com/sayarghoshroy/InfoPopularity
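A minimal sketch of one way to attach a scalar regression head to a BERT encoder for per-sentence popularity scores; the paper's sequence-regression formulation and the salience-prediction transfer step are omitted, and all names and hyperparameters here are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentencePopularityRegressor(nn.Module):
    """BERT encoder with a linear head that maps each sentence to a scalar
    popularity score (a generic sketch, not the paper's exact architecture)."""

    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] representation per sentence
        return self.head(cls).squeeze(-1)   # scalar popularity score

# tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# batch = tokenizer(["An example sentence."], return_tensors="pt", padding=True)
# scores = SentencePopularityRegressor()(batch["input_ids"], batch["attention_mask"])
```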
Almost 80 million Americans suffer from hair loss due to aging, stress, medication, or genetic makeup. Hair and scalp-related diseases often go unnoticed in the beginning. Sometimes, a patient cannot differentiate between hair loss and regular hair fall. Diagnosing hair-related diseases is time-consuming, as it requires professional dermatologists to perform visual and medical tests. Because of that, the overall diagnosis gets delayed, which worsens the severity of the illness. Owing to their image-processing ability, neural network-based applications are used in various sectors, especially healthcare and health informatics, to predict deadly diseases like cancers and tumors. These applications assist clinicians and patients and provide an initial insight into early-stage symptoms. In this study, we used a deep learning approach that successfully predicts three main types of hair loss and scalp-related diseases: alopecia, psoriasis, and folliculitis. However, the limited research in this area, the unavailability of a proper dataset, and the degree of variety among the images scattered over the internet made the task challenging. 150 images were obtained from various sources and then preprocessed by denoising, image equalization, enhancement, and data balancing, thereby minimizing the error rate. After feeding the processed data into the 2D convolutional neural network (CNN) model, we obtained an overall training accuracy of 96.2%, with a validation accuracy of 91.1%. The precision and recall scores for alopecia, psoriasis, and folliculitis are 0.895, 0.846, and 1.0, respectively. We also created a dataset of the scalp images for future prospective researchers.
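For illustration, a small 2D CNN of the kind described could be sketched as below; the layer sizes, 128x128 RGB input, and dropout rate are assumptions rather than the paper's exact architecture.

```python
import torch.nn as nn

# A small 2D CNN for three classes (alopecia, psoriasis, folliculitis).
# Layer sizes and the 128x128 RGB input are assumptions, not the paper's exact model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, 3),   # logits for the three scalp conditions
)
# Train with nn.CrossEntropyLoss() on the preprocessed (denoised, equalized,
# balanced) images.
```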
To date, no "information-theoretic" frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization. In this work, we consider the prospect of establishing such rates via several existing information-theoretic frameworks: input-output mutual information bounds, conditional mutual information bounds and variants, PAC-Bayes bounds, and recent conditional variants thereof. We prove that none of these bounds are able to establish minimax rates. We then consider a common tactic employed in studying gradient methods, whereby the final iterate is corrupted by Gaussian noise, producing a noisy "surrogate" algorithm. We prove that minimax rates cannot be established via the analysis of such surrogates. Our results suggest that new ideas are required to analyze gradient descent using information-theoretic techniques.
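For context, the input-output mutual information framework mentioned first is typified by the Xu-Raginsky bound: for an algorithm with output hypothesis W trained on an i.i.d. sample S of n points, and a loss that is sigma-subgaussian under the data distribution for every hypothesis, the expected generalization gap satisfies the standard bound below (quoted from the literature, not a result of this paper).

```latex
% Canonical input-output mutual information bound (Xu & Raginsky, 2017):
% L_\mu is the population risk and L_S the empirical risk on the sample S.
\left| \, \mathbb{E}\big[ L_\mu(W) - L_S(W) \big] \, \right|
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(S;W)}{n}}
```

The abstract's claim is that neither this bound nor its conditional, PAC-Bayes, or noisy-surrogate variants can certify the minimax rate of gradient descent in stochastic convex optimization.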
Prevailing methods for assessing and comparing generative AIs incentivize responses that serve a hypothetical representative individual. Evaluating models in these terms presumes homogeneous preferences across the population and engenders selection of agglomerative AIs, which fail to represent the diverse range of interests across individuals. We propose an alternative evaluation method that instead prioritizes inclusive AIs, which provably retain the requisite knowledge not only for subsequent response customization to particular segments of the population but also for utility-maximizing decisions.
We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. Language models are generally trained on publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes a semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems, enabling us to leverage the ability of LLMs to zero-shot answer reading comprehension questions. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions and, as a result, cannot handle questions corresponding to missing slots. To address this problem, we fine-tune a language model on public QA datasets using synthetic negative samples. Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can correctly parse ~16% of utterances in the MTOP dataset without requiring any annotated data.
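A simplified sketch of the QA-style decomposition: prompt an LLM once for the top-level intent and once per slot, treating "unanswerable" replies as missing slots. The prompt wording, the `llm` callable, and the output format are assumptions, not the paper's exact templates.

```python
from typing import Callable, Dict

def parse_with_qa(utterance: str,
                  intent_question: str,
                  slot_questions: Dict[str, str],
                  llm: Callable[[str], str]) -> Dict[str, str]:
    """Decompose semantic parsing into QA prompts (a simplified sketch of the
    ZEROTOP idea; prompt templates and the `llm` callable are hypothetical)."""
    parse = {"intent": llm(f"{utterance}\n{intent_question}").strip()}
    for slot, question in slot_questions.items():
        answer = llm(f"{utterance}\n{question}").strip()
        # A fine-tuned QA model is expected to say "unanswerable" for missing slots.
        if answer.lower() != "unanswerable":
            parse[slot] = answer
    return parse

# Usage with a hypothetical llm() function:
# parse_with_qa("Set an alarm for 7 am",
#               "What does the user want to do?",
#               {"time": "What time is mentioned?"},
#               llm=my_llm)
```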