Marketing campaigns are a set of strategic activities that promote a business's goals. Predicting the effect of marketing campaigns in real industrial scenarios is complex and challenging, because prior knowledge is typically learned from observational data without any intervention from the campaigns themselves, and because each subject is always perturbed by several campaigns simultaneously. As a result, the effect of a single campaign cannot easily be disentangled and evaluated. To the best of our knowledge, there is as yet no effective method for this kind of problem, namely modeling an individual-level prediction task on a hierarchical structure with multiple intertwined events. In this paper, we provide an in-depth analysis of the underlying parse-tree-like structure involved in the effect prediction task, and further establish a Hierarchical Capsule Prediction Network (HAPNET) for predicting the effects of marketing campaigns. Extensive results on both synthetic data and real data demonstrate the superiority of our model over state-of-the-art methods and show its remarkable practicability in real industrial applications.
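As a rough illustration of the capsule machinery a model like HAPNET builds on, the sketch below implements standard dynamic routing between two capsule layers in PyTorch. HAPNET's actual hierarchical, parse-tree-aware design is not reproduced here; all shapes, iteration counts, and names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Non-linear squashing: preserves vector orientation, maps norm into [0, 1).
    sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * s / torch.sqrt(sq + eps)

def dynamic_routing(u_hat, n_iters=3):
    # u_hat: (batch, in_caps, out_caps, dim) prediction vectors.
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
    for _ in range(n_iters):
        c = F.softmax(b, dim=2).unsqueeze(-1)          # coupling coefficients
        v = squash((c * u_hat).sum(dim=1))             # (batch, out_caps, dim)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)   # agreement update
    return v

# Toy usage: 8 lower-level "event" capsules routed to 3 higher-level ones.
v = dynamic_routing(torch.randn(4, 8, 3, 16))
print(v.shape)  # torch.Size([4, 3, 16])
```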
Because recommender systems (RS) play a pivotal role in guiding customers toward purchases, there is a natural incentive for unscrupulous parties to spoof RS for profit. In this paper, we study shilling attacks, in which an adversarial party injects a number of fake user profiles for improper purposes. Conventional shilling attack methods lack attack transferability (i.e., the attacks are ineffective against some victim RS models) and/or attack invisibility (i.e., the injected profiles are easily detected). To overcome these issues, we propose Leg-Up, a novel attack model based on generative adversarial networks. Leg-Up learns user behavior patterns from real users in sampled "templates" and constructs fake user profiles. To simulate real users, the generator in Leg-Up directly outputs discrete ratings. To enhance attack transferability, the generator's parameters are optimized by maximizing attack performance on a surrogate RS model. To improve attack invisibility, Leg-Up adopts a discriminator that guides the generator to produce undetectable fake user profiles. Experiments on benchmarks show that Leg-Up outperforms state-of-the-art shilling attack methods across a wide range of victim RS models. The source code of our work is available at: https://github.com/xmudm/shillingattack.
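The core idea of a generator that emits discrete ratings while remaining trainable can be sketched with a Gumbel-softmax output layer. This is a simplified stand-in for Leg-Up (no templates, no surrogate RS model), and all layer sizes and names are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ITEMS, N_LEVELS = 50, 5          # rating levels 1..5

class Generator(nn.Module):
    # Maps noise to per-item logits over discrete rating levels.
    def __init__(self, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, N_ITEMS * N_LEVELS))
    def forward(self, z, tau=0.5):
        logits = self.net(z).view(-1, N_ITEMS, N_LEVELS)
        # hard=True emits one-hot (discrete) ratings in the forward pass
        # while keeping soft gradients for the generator.
        return F.gumbel_softmax(logits, tau=tau, hard=True)

class Discriminator(nn.Module):
    # Scores whether a rating profile looks like a real user's.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_ITEMS * N_LEVELS, 128),
                                 nn.ReLU(), nn.Linear(128, 1))
    def forward(self, profiles):
        return self.net(profiles.flatten(1))

G, D = Generator(), Discriminator()
fake = G(torch.randn(8, 16))                 # (8, N_ITEMS, N_LEVELS) one-hot
d_loss_fake = F.binary_cross_entropy_with_logits(
    D(fake.detach()), torch.zeros(8, 1))     # discriminator labels fakes 0
g_loss = F.binary_cross_entropy_with_logits(
    D(fake), torch.ones(8, 1))               # generator tries to fool D
print(fake.argmax(-1)[0][:10] + 1)           # first fake user's ratings 1..5
```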
The performance of sensor-based camera identification (SCI) methods relies heavily on the denoising filter used to estimate the photo response non-uniformity (PRNU). Despite various attempts to improve the quality of the extracted PRNU, performance on low-resolution images remains unsatisfactory and the computational demand remains high. Leveraging the similarity between PRNU estimation and image denoising, we exploit recent achievements of convolutional neural network (CNN) based denoisers for PRNU extraction. This paper presents a comparative evaluation of SCI performance on the public "Dresden Image Database". Our findings are two-fold. On the one hand, both PRNU extraction and image denoising separate noise from image content; hence, SCI can benefit from recent CNN denoisers if they are carefully trained. On the other hand, the objectives and scenarios of PRNU extraction and image denoising differ, since one optimizes the quality of the noise and the other the quality of the image, so carefully tailored training is needed when CNN denoisers are used for PRNU estimation. Alternative strategies for training data preparation and loss function design are evaluated both theoretically and experimentally. We show that feeding CNNs with image-PRNU pairs and training them with a correlation-based loss function leads to the best PRNU estimation performance. To facilitate further studies of SCI, we also propose a minimal-loss camera fingerprint quantization scheme, with which we save fingerprints as image files in PNG format. Furthermore, we make the quantized fingerprints of the cameras in the "Dresden Image Database" publicly available.
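The correlation-based matching that underlies both SCI and a correlation-based training loss (1 minus the correlation) can be sketched in a few lines of NumPy. The `ncc`/`identify` helpers, the patch sizes, and the synthetic fingerprints below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ncc(a, b, eps=1e-12):
    # Normalized cross-correlation between two noise residuals.
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def identify(residual, fingerprints):
    # Attribute an image residual to the camera whose PRNU correlates best.
    scores = {cam: ncc(residual, fp) for cam, fp in fingerprints.items()}
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
fp_a, fp_b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
test_residual = fp_a + 0.8 * rng.normal(size=(64, 64))  # noisy copy of camera A
cam, scores = identify(test_residual, {"cam_A": fp_a, "cam_B": fp_b})
print(cam, {k: round(v, 3) for k, v in scores.items()})
```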
Traffic forecasting is important in intelligent transportation systems and benefits traffic safety, yet it is very challenging because of the complex and dynamic spatio-temporal dependencies in real-world traffic systems. Earlier methods use predefined or learned static graphs to extract spatial correlations, but static-graph-based methods cannot mine the evolution of traffic networks. Researchers have since generated a dynamic graph for each time slice to reflect changes in spatial correlations, but these approaches follow the paradigm of modeling spatial and temporal dependencies independently, ignoring cross-time spatial influence. In this paper, we propose a novel cross-time dynamic graph-based deep learning model, named CDGNet, for traffic forecasting. The model effectively captures the cross-time spatial dependence between each time slice and its historical time slices by utilizing cross-time dynamic graphs. Meanwhile, we design a gating mechanism to sparsify the cross-time dynamic graphs, which conforms to the sparse spatial correlations found in the real world. In addition, we propose a novel encoder-decoder architecture that incorporates the cross-time dynamic graph-based GCN for multi-step traffic forecasting. Experimental results on three real-world public traffic datasets demonstrate that CDGNet outperforms state-of-the-art baselines. We also provide a qualitative study to analyze the effectiveness of our architecture.
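A minimal sketch of one cross-time graph-convolution step, assuming the dynamic graph is built from feature similarity between the current and a historical time slice, and sparsified by a simple top-k rule (a stand-in for the paper's learned gating mechanism; all shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def cross_time_gcn_step(x_t, x_hist, w, topk=3):
    # x_t: (n, d) node features at the current slice; x_hist: (n, d) a past slice.
    sim = x_t @ x_hist.T                                  # (n, n) cross-time affinity
    # Sparsify: keep only the top-k historical neighbors per node.
    mask = torch.zeros_like(sim).scatter_(
        1, sim.topk(topk, dim=1).indices, 1.0)
    adj = F.softmax(sim.masked_fill(mask == 0, float('-inf')), dim=1)
    return F.relu(adj @ x_hist @ w)                       # aggregate past information

n, d = 10, 8
x_t, x_hist = torch.randn(n, d), torch.randn(n, d)
out = cross_time_gcn_step(x_t, x_hist, torch.randn(d, d))
print(out.shape)  # torch.Size([10, 8])
```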
This paper introduces WenetSpeech, a multi-domain Mandarin corpus consisting of 10,000+ hours of high-quality labeled speech, 2,400+ hours of weakly labeled speech, and about 10,000 hours of unlabeled speech, 22,400+ hours in total. We collect the data from YouTube and podcasts, covering a variety of speaking styles, scenarios, domains, topics, and noisy conditions. An optical character recognition (OCR) based method is introduced to generate audio/text segmentation candidates for the YouTube data from the corresponding video subtitles, while a high-quality ASR transcription system is used to generate audio/text pair candidates for the podcast data. We then propose a novel end-to-end label error detection approach to further validate and filter the candidates. We also provide three manually labeled high-quality test sets along with WenetSpeech for evaluation: Dev for cross-validation during training, Test_Net collected from the internet for matched testing, and Test_Meeting recorded from real meetings for a more challenging mismatched test. Baseline systems trained with WenetSpeech are provided for three popular speech recognition toolkits, namely Kaldi, ESPnet, and WeNet, and recognition results on the three test sets are provided as benchmarks. To the best of our knowledge, WenetSpeech is currently the largest open-source Mandarin speech corpus, and it benefits research on production-level speech recognition.
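The candidate validation step can be approximated by a character-error-rate filter between the OCR caption and an ASR hypothesis. The paper's actual detector is a learned end-to-end model, so the threshold rule below is only a proxy, and the function names and the 0.1 cutoff are assumptions.

```python
def edit_distance(a, b):
    # Levenshtein distance via a rolling-row dynamic program.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def filter_candidates(pairs, max_cer=0.1):
    # Keep segments whose caption and ASR transcript agree closely enough.
    kept = []
    for audio_id, caption, asr_hyp in pairs:
        cer = edit_distance(caption, asr_hyp) / max(len(caption), 1)
        if cer <= max_cer:
            kept.append((audio_id, caption))
    return kept

pairs = [("seg001", "今天天气很好", "今天天气很好"),
         ("seg002", "今天天气很好", "明天天气不好")]
print(filter_candidates(pairs))  # only seg001 survives
```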
Price movement forecasting aims to predict the future trend of financial assets based on current market conditions and other relevant information. Recently, machine learning (ML) methods have become increasingly popular and have achieved promising forecasting results in both academia and industry. Most existing ML solutions formulate the forecasting problem as a classification (predicting the direction) or regression (predicting the return) problem over the entire training dataset. However, due to the extremely low signal-to-noise ratio and the stochastic nature of financial data, good trading opportunities are extremely scarce. As a result, without careful selection of potentially profitable samples, such ML methods tend to capture the patterns of noise rather than of real signals. To address this issue, we propose a novel price movement forecasting framework, called Locality-Aware Attention and Iterative Refinement Labeling (LARA), which consists of two main components: 1) locality-aware attention automatically extracts potentially profitable samples by attending to the class-aware label information of their surroundings; moreover, equipped with metric learning techniques, locality-aware attention enjoys a task-specific distance metric and distributes attention over potentially profitable samples more effectively; 2) iterative refinement labeling further refines the labels of noisy samples iteratively and then combines the learned predictors to be robust to unseen and noisy samples. In a number of experiments on three real-world financial markets (ETFs, stocks, and cryptocurrencies), LARA achieves superior performance compared with traditional time-series analysis methods and a set of machine-learning-based competitors on the Qlib platform. Extensive ablation studies and experiments also demonstrate that LARA indeed captures more reliable trading opportunities.
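A toy analogue of the two components, assuming a plain Euclidean kNN in place of LARA's learned metric and attention: samples whose neighbours agree on the label are treated as reliable, and disagreeing labels are flipped in a refinement pass. Function names, the neighbourhood size, and the agreement threshold are all illustrative.

```python
import numpy as np

def locality_scores(X, y, k=5):
    # Score each sample by how strongly its k nearest neighbours share its
    # label; high agreement marks cleaner, potentially tradable samples.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]
    return (y[nn] == y[:, None]).mean(axis=1)

def refine_labels(scores, y, thresh=0.6):
    # One refinement pass: flip binary labels whose neighbourhood disagrees.
    y = y.copy()
    y[scores < thresh] = 1 - y[scores < thresh]
    return y

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)
y_noisy = np.where(rng.random(200) < 0.1, 1 - y, y)   # 10% label noise
s = locality_scores(X, y_noisy)
y_refined = refine_labels(s, y_noisy)
print("fixed:", ((y_refined == y) & (y_noisy != y)).sum(), "of",
      (y_noisy != y).sum(), "noisy labels")
```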
In this paper we explore the task of modeling (semi) structured object sequences; in particular we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key, over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments on real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, and present detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening the sequence objects and also allows us to operate on significantly larger sequences than existing methods.
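A compressed sketch of the two components in PyTorch, assuming a GRU as the value-sequence encoder. The paper's pre-training, fine-tuning, and interleaved schedule with shared attention heads are collapsed into a single forward pass here, so the class below is an illustration of the TVM-then-KA data flow, not the authors' model.

```python
import torch
import torch.nn as nn

class TVMKASketch(nn.Module):
    # Encode each key's value sequence with a shared encoder (TVM),
    # then self-attend across the per-key representations (KA).
    def __init__(self, d=32, n_keys=6):
        super().__init__()
        self.value_encoder = nn.GRU(d, d, batch_first=True)            # TVM stand-in
        self.key_attn = nn.MultiheadAttention(d, 4, batch_first=True)  # KA stand-in
        self.key_emb = nn.Embedding(n_keys, d)
    def forward(self, values):
        # values: (batch, n_keys, seq_len, d) -- one value sequence per key.
        b, k, t, d = values.shape
        _, h = self.value_encoder(values.reshape(b * k, t, d))
        per_key = h[-1].reshape(b, k, d) + self.key_emb.weight  # (b, k, d)
        out, _ = self.key_attn(per_key, per_key, per_key)
        return out.mean(dim=1)                       # sequence representation

model = TVMKASketch()
repr_ = model(torch.randn(2, 6, 10, 32))
print(repr_.shape)  # torch.Size([2, 32])
```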
Increasing research interest has focused on sequential recommender systems, which aim to model dynamic sequence representations precisely. However, the loss functions most commonly used in state-of-the-art sequential recommendation models have essential limitations. To name a few: Bayesian Personalized Ranking (BPR) loss suffers from the vanishing gradient problem caused by numerous negative samples and prediction biases; Binary Cross-Entropy (BCE) loss is sensitive to the number of negative samples, and is therefore likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss focuses only on the last timestamp of the training sequence, which leads to low utilization of sequence information and results in inferior user sequence representations. To avoid these limitations, in this paper we propose calculating a Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, and enjoys the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing CCE loss on three state-of-the-art models, GRU4Rec, SASRec, and S3-Rec, yields average improvements in full-ranking NDCG@5 of 125.63%, 69.90%, and 33.24%, respectively. With CCE, the performance curve of the models on the test data rises rapidly with wall-clock time and stays above that of the other loss functions for almost the entire course of model training.
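Since CCE is full-softmax cross-entropy accumulated over every timestamp of the sequence, it reduces to a few lines in PyTorch. Whether the paper averages per token or per sequence is not stated here, so the token-level `mean` with padding ignored below is an assumption.

```python
import torch
import torch.nn.functional as F

def cce_loss(logits, targets, pad_id=0):
    # logits: (batch, seq_len, n_items); targets: (batch, seq_len).
    # Cumulative Cross-Entropy: full-softmax CE over *every* timestamp
    # (no negative sampling), not just the last one.
    return F.cross_entropy(logits.transpose(1, 2), targets,
                           ignore_index=pad_id, reduction='mean')

batch, seq_len, n_items = 4, 10, 1000
logits = torch.randn(batch, seq_len, n_items, requires_grad=True)
targets = torch.randint(1, n_items, (batch, seq_len))
targets[:, :3] = 0                       # padded prefix is ignored
print(cce_loss(logits, targets))
```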
Deploying reliable deep learning techniques in interdisciplinary applications requires learned models to output accurate and (even more importantly) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations follow from accurate predictions/classifications. We make the opposite claim: explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction can be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network, dubbed NeuroExplainer, and apply it to uncover altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attentions and the respective discriminative representations, so as to accurately recognize preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision, coupled with targeted regularizers deduced from domain knowledge of brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) during network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer produces quantitatively reliable explanation results that are qualitatively consistent with representative neuroimaging studies.
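Generic sparsity and stability penalties on an attention map can be written as below. The paper's fidelity metric and its domain-derived, prior-guided constraints are not reproduced; the perturbed map here is a stand-in for attention recomputed from a jittered input, and the weights are arbitrary.

```python
import torch

def explainability_regularizers(attn, attn_perturbed,
                                l_sparse=1e-3, l_stable=1e-2):
    # attn: (batch, n_vertices) attention over cortical vertices in [0, 1].
    sparsity = attn.abs().mean()                        # prefer few salient regions
    stability = (attn - attn_perturbed).pow(2).mean()   # similar input -> similar map
    return l_sparse * sparsity + l_stable * stability

attn = torch.sigmoid(torch.randn(8, 500, requires_grad=True))
attn_pert = attn + 0.01 * torch.randn_like(attn)  # proxy for jittered-input attention
print(explainability_regularizers(attn, attn_pert))
```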
The utilization of large-scale distributed renewable energy promotes the development of the multi-microgrid (MMG), which raises the need for an effective energy management method that minimizes economic costs and maintains energy self-sufficiency. Multi-agent deep reinforcement learning (MADRL) has been widely applied to the energy management problem because of its real-time scheduling ability. However, its training requires massive energy operation data from microgrids (MGs), and gathering these data from different MGs would threaten their privacy and data security. This paper therefore tackles this practical yet challenging issue by proposing a federated multi-agent deep reinforcement learning (F-MADRL) algorithm with a physics-informed reward. In this algorithm, the federated learning (FL) mechanism is introduced to train the F-MADRL algorithm, thus ensuring data privacy and security. In addition, a decentralized MMG model is built, and the energy of each participating MG is managed by an agent that aims to minimize economic costs and maintain energy self-sufficiency according to the physics-informed reward. First, the MGs individually perform self-training based on local energy operation data to train their local agent models. Then, these local models are periodically uploaded to a server, where their parameters are aggregated to build a global agent, which is broadcast to the MGs to replace their local agents. In this way, the experience of each MG agent is shared while the energy operation data are never explicitly transmitted, thus protecting privacy and ensuring data security. Finally, experiments are conducted on the Oak Ridge National Laboratory distributed energy control communication lab microgrid (ORNL-MG) test system, and comparisons are carried out to verify the effectiveness of introducing the FL mechanism and the superior performance of our proposed F-MADRL.
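The aggregation/broadcast round is standard FedAvg over agent parameters. The sketch below omits the MADRL training loop and the physics-informed reward, and the tiny policy network is purely illustrative.

```python
import copy
import torch
import torch.nn as nn

def fedavg(local_models, weights=None):
    # Average the parameters of locally trained agents into a global agent;
    # only model weights leave each microgrid, never raw operation data.
    n = len(local_models)
    weights = weights or [1.0 / n] * n
    global_model = copy.deepcopy(local_models[0])
    with torch.no_grad():
        for name, p in global_model.named_parameters():
            p.copy_(sum(w * dict(m.named_parameters())[name]
                        for w, m in zip(weights, local_models)))
    return global_model

make_agent = lambda: nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
locals_ = [make_agent() for _ in range(3)]   # one policy net per MG
global_agent = fedavg(locals_)
# Broadcast: each MG replaces its local agent with the aggregated one.
locals_ = [copy.deepcopy(global_agent) for _ in locals_]
print(sum(p.numel() for p in global_agent.parameters()), "params aggregated")
```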