With the rapid development of cloud computing, virtual machine (VM) scheduling has become one of the most important yet challenging problems for the cloud computing community, especially for practical heterogeneous request sequences. By analyzing the impact of request heterogeneity on several popular heuristic schedulers, we find that existing scheduling algorithms cannot handle request heterogeneity properly or efficiently. In this paper, a plug-and-play virtual machine scheduling intensifier, called Resource Assigner (ReAssigner), is proposed to enhance the scheduling efficiency of any given scheduler for heterogeneous requests. The key idea of ReAssigner is to pre-assign roles to physical resources and let resources of the same role form a virtual cluster to handle homogeneous requests. ReAssigner can cooperate with arbitrary schedulers by restricting their scheduling space to virtual clusters. In evaluations on a real dataset from Huawei Cloud, the proposed ReAssigner achieves significant scheduling performance improvement over several state-of-the-art scheduling methods.
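Conceptually, ReAssigner wraps an existing scheduler and narrows its candidate set to the one virtual cluster whose role matches the incoming request. Below is a minimal Python sketch of that idea; the role taxonomy and the interface names (`role_of_machine`, `inner_scheduler.pick`) are illustrative assumptions, not the paper's actual API.

```python
from collections import defaultdict

class ReAssignerSketch:
    def __init__(self, machines, role_of_machine, inner_scheduler):
        # Pre-assign a role to each physical machine; machines sharing a
        # role form a virtual cluster serving one homogeneous request type.
        self.clusters = defaultdict(list)
        for m in machines:
            self.clusters[role_of_machine(m)].append(m)
        self.inner_scheduler = inner_scheduler  # any existing heuristic

    def schedule(self, request):
        # Restrict the wrapped scheduler's search space to the virtual
        # cluster whose role matches the request type.
        candidates = self.clusters[request.role]
        return self.inner_scheduler.pick(request, candidates)
```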
This paper investigates the task of 2D whole-body human pose estimation, which aims to localize dense landmarks over the entire human body, including the body, feet, face, and hands. We propose a single-network approach, termed ZoomNet, that accounts for the hierarchical structure of the full human body and addresses the scale variation across different body parts. We further propose a neural architecture search framework, termed ZoomNAS, to promote both the accuracy and efficiency of whole-body pose estimation. ZoomNAS jointly searches the model architecture and the connections between different sub-modules, and automatically allocates computational complexity to the searched sub-modules. To train and evaluate ZoomNAS, we introduce the first large-scale 2D human whole-body dataset, namely COCO-WholeBody V1.0, which annotates 133 keypoints on in-the-wild images. Extensive experiments demonstrate the effectiveness of ZoomNAS and the significance of COCO-WholeBody V1.0.
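As we read the abstract, the core "zoom-in" idea is to estimate coarse body keypoints first, then crop and upsample the small parts (face, hands) so dedicated sub-networks see them at a comparable scale. The sketch below illustrates only that cropping step; all names and shapes are illustrative assumptions, not ZoomNet's actual architecture.

```python
import torch
import torch.nn.functional as F

def zoom_crop(image, center, crop=64, out=128):
    # Crop a square window around `center` (x, y in pixels) and upsample it,
    # so small parts such as faces and hands keep enough resolution.
    _, _, h, w = image.shape
    x0 = max(0, min(int(center[0]) - crop // 2, w - crop))
    y0 = max(0, min(int(center[1]) - crop // 2, h - crop))
    patch = image[:, :, y0:y0 + crop, x0:x0 + crop]
    return F.interpolate(patch, size=(out, out), mode="bilinear",
                         align_corners=False)

image = torch.randn(1, 3, 512, 512)
coarse_keypoints = {"face": (260.0, 120.0), "left_hand": (90.0, 300.0)}
face_input = zoom_crop(image, coarse_keypoints["face"])       # fed to a face sub-net
hand_input = zoom_crop(image, coarse_keypoints["left_hand"])  # fed to a hand sub-net
```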
Human pose estimation aims to accurately estimate a wide variety of human poses. However, existing datasets often follow a long-tailed distribution in which unusual poses occupy only a small fraction, which further leads to a lack of diversity in rare poses. These issues limit the generalization ability of current pose estimators. In this paper, we propose a simple yet effective data augmentation method, termed Pose Transformation (PoseTrans), to alleviate the aforementioned problems. Specifically, we propose a Pose Transformation Module (PTM) to create new training samples with diverse poses, and adopt a pose discriminator to ensure the plausibility of the augmented poses. Furthermore, we propose a Pose Clustering Module (PCM) to measure pose rarity and select the "rarest" poses, helping to balance the long-tailed distribution. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our method, especially on rare poses. Moreover, our method is efficient and easy to implement, and can be readily integrated into the training pipelines of existing pose estimation models.
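A rough sketch of the augmentation loop described above: synthesize several pose variants, keep only plausible ones (the discriminator's job), and prefer rare ones (the clustering module's job). The callables and the assumption that `transform_pose` returns an object with `.image`/`.pose` attributes are illustrative placeholders, not PoseTrans's actual modules.

```python
def augment_sample(image, pose, transform_pose, is_plausible, rarity,
                   n_candidates=8):
    # Generate several pose-transformed variants of the training sample.
    candidates = [transform_pose(image, pose) for _ in range(n_candidates)]
    # Pose discriminator: keep only plausible synthesized poses.
    plausible = [c for c in candidates if is_plausible(c.pose)]
    if not plausible:
        return image, pose  # fall back to the original sample
    # Pose clustering: prefer the candidate farthest from common pose
    # clusters, rebalancing the long-tailed pose distribution.
    best = max(plausible, key=lambda c: rarity(c.pose))
    return best.image, best.pose
```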
Estimating the 3D poses of interacting hands from a single RGB image is essential for understanding human behavior. Unlike most previous works that directly predict the 3D poses of two interacting hands, we propose to decompose the challenging interacting hand pose estimation task and estimate the pose of each hand separately. In this way, recent progress on single-hand pose estimation systems can be directly exploited. However, hand pose estimation in interacting scenarios is very challenging due to (1) severe inter-hand occlusion and (2) the ambiguity between the two hands. To tackle these two challenges, we propose a novel Hand De-occlusion and Removal (HDR) framework that performs hand de-occlusion and distractor removal. We also present the first large-scale synthetic amodal hand dataset, termed the Amodal InterHand Dataset (AIH), to facilitate model training and promote the development of related research. Experiments show that the proposed method significantly outperforms previous state-of-the-art interacting hand pose estimation approaches. Code and data are available at https://github.com/menghao666/hdr.
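A minimal sketch of the decomposition strategy described above: recover the occluded parts of the target hand, erase the distracting other hand, then reuse an off-the-shelf single-hand estimator. The three callables are hypothetical placeholders standing in for the framework's components, not the paper's API.

```python
def estimate_interacting_hands(image, deocclude, remove_distractor,
                               single_hand_pose):
    poses = {}
    for hand in ("left", "right"):
        # 1) Amodal completion: inpaint the parts of `hand` hidden by the other.
        completed = deocclude(image, target=hand)
        # 2) Distractor removal: erase the other hand to resolve ambiguity.
        isolated = remove_distractor(completed, keep=hand)
        # 3) Run a standard single-hand estimator on the cleaned-up image.
        poses[hand] = single_hand_pose(isolated)
    return poses
```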
Existing works on 2D pose estimation mainly focus on a certain category, e.g., humans, animals, or vehicles. However, many application scenarios require detecting the poses/keypoints of unseen object classes. In this paper, we introduce the task of Category-Agnostic Pose Estimation (CAPE), which aims to create a pose estimation model capable of detecting the pose of any class of objects given only a few samples with keypoint definitions. To achieve this goal, we formulate the pose estimation problem as a keypoint matching problem and design a novel CAPE framework, termed POse Matching Network (POMNet). A transformer-based Keypoint Interaction Module (KIM) is proposed to capture the interactions among different keypoints as well as the relationship between the support and query images. We also introduce the Multi-category Pose (MP-100) dataset, a 2D pose dataset comprising 100 object categories and 20K instances, which is well designed for developing CAPE algorithms. Experiments show that our method outperforms other baseline approaches by a large margin. Code and data are available at https://github.com/luminxu/Pose-for-Everything.
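The keypoint-matching formulation can be illustrated with a toy correlation step: each support keypoint yields a descriptor, which is matched against the dense features of the query image to produce a per-keypoint similarity heatmap. Shapes and the simple dot-product matcher are assumptions for illustration; POMNet's actual matching is transformer-based.

```python
import torch

def match_keypoints(support_feats, query_feats):
    # support_feats: (K, C), one descriptor per defined keypoint
    # query_feats:   (C, H, W), dense features of the query image
    c, h, w = query_feats.shape
    heatmaps = support_feats @ query_feats.reshape(c, h * w)  # (K, H*W)
    # The per-keypoint argmax gives the predicted location in the query image.
    flat = heatmaps.argmax(dim=1)
    return torch.stack((flat % w, flat // w), dim=1)  # (K, 2) x, y coords

coords = match_keypoints(torch.randn(17, 256), torch.randn(256, 64, 48))
```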
Fairness has become a critical metric for machine learning models and is considered an important component of trustworthy machine learning. In this paper, we focus on obtaining fairness for the popular link prediction task, measured by dyadic fairness. A novel pre-processing methodology is proposed to establish dyadic fairness through data repairing based on optimal transport theory. Building on the well-established theoretical connection between dyadic fairness for graph link prediction and a conditional distribution alignment problem, the dyadic repairing scheme can be equivalently transformed into a conditional distribution alignment problem. Furthermore, an optimal transport-based dyadic fairness algorithm, called DyadicOT, is obtained by efficiently solving the alignment problem while satisfying both flexibility and unambiguity requirements. The proposed DyadicOT algorithm shows superior results in obtaining fairness compared with other fairness methods on two benchmark graph datasets.
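A small sketch of OT-based data repairing in the spirit of the abstract: move the node features of each sensitive group toward a common distribution so that link scores cannot depend on group membership. It uses the POT library; the barycentric-projection repair rule and the 50/50 interpolation are standard OT devices taken as assumptions here, not DyadicOT's exact procedure.

```python
import numpy as np
import ot  # Python Optimal Transport: pip install pot

def repair_features(x_group0, x_group1):
    n0, n1 = len(x_group0), len(x_group1)
    a, b = np.full(n0, 1 / n0), np.full(n1, 1 / n1)
    # Optimal coupling between the two groups' empirical distributions.
    plan = ot.emd(a, b, ot.dist(x_group0, x_group1))
    # Barycentric projection: map each group-0 point to its matched mass in
    # group 1, then interpolate halfway as the repaired representation.
    mapped0 = (plan @ x_group1) * n0
    return (x_group0 + mapped0) / 2.0

repaired = repair_features(np.random.randn(50, 16), np.random.randn(80, 16))
```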
Temporal action proposal generation (TAPG) is a challenging task that aims to locate action instances with temporal boundaries in untrimmed videos. To evaluate the confidence of proposals, existing works typically predict action scores for proposals, supervised by the temporal Intersection-over-Union (tIoU) between the proposals and the ground truth. In this paper, we innovatively propose a general auxiliary Background Constraint idea to further suppress low-quality proposals, by utilizing the background prediction score to restrict the confidence of proposals. In this way, the Background Constraint concept can be easily plugged into existing TAPG methods (e.g., BMN, GTAD). From this perspective, we propose the Background Constraint Network (BCNet) to further exploit the rich information of action and background. Specifically, we introduce an Action-Background Interaction module for reliable confidence evaluation, which models the inconsistency between action and background through attention mechanisms at the frame and clip levels. Extensive experiments are conducted on two popular benchmarks, i.e., ActivityNet-1.3 and THUMOS14. The results demonstrate that our method outperforms state-of-the-art methods. Equipped with an existing action classifier, our method also achieves remarkable performance on the temporal action localization task.
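A toy sketch of the background-constraint idea: a proposal's confidence is penalized by its predicted background score, so segments that also look like background are suppressed. The multiplicative combination rule below is an illustrative assumption, not BCNet's exact formulation.

```python
def constrained_confidence(action_score: float, background_score: float) -> float:
    # High background evidence caps the usable action confidence.
    return action_score * (1.0 - background_score)

proposals = [
    {"span": (3.0, 9.5), "action": 0.92, "background": 0.10},
    {"span": (12.0, 15.0), "action": 0.88, "background": 0.75},  # likely noise
]
for p in proposals:
    p["confidence"] = constrained_confidence(p["action"], p["background"])
# The second proposal's confidence drops from 0.88 to 0.22 and is pruned,
# even though its raw action score alone looked competitive.
```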
A novel simulator called VMAgent is introduced to help RL researchers better explore new methods, especially for virtual machine (VM) scheduling. VMAgent is inspired by practical VM scheduling tasks and provides an efficient simulation platform that reflects the real situations of cloud computing. Three scenarios (fading, recovering, and expansion) are concluded from practical cloud computing, corresponding to several reinforcement learning challenges (high-dimensional state and action spaces, high non-stationarity, and life-long demand). VMAgent provides flexible configurations for RL researchers to design customized scheduling environments that consider different problem features. From the VM scheduling perspective, VMAgent also helps to explore better learning-based scheduling solutions.
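The interaction pattern such a simulator exposes is essentially a gym-style loop: observe free resources, place a VM request on a server, receive a reward. The toy environment below is a hypothetical stand-in written to illustrate that loop; the observation contents, reward, and termination rule are assumptions, so consult the VMAgent documentation for the actual interface.

```python
import random

class ToySchedulingEnv:
    """State: free CPU/mem per server; action: index of the server to use."""
    def __init__(self, n_servers=5, cpu=32, mem=64):
        self.capacity = [[cpu, mem] for _ in range(n_servers)]

    def step(self, action, request):
        cpu, mem = request
        if self.capacity[action][0] >= cpu and self.capacity[action][1] >= mem:
            self.capacity[action][0] -= cpu
            self.capacity[action][1] -= mem
            return self.capacity, 1.0, False   # obs, reward, done
        return self.capacity, -1.0, True       # infeasible placement ends it

env = ToySchedulingEnv()
obs, done = env.capacity, False
while not done:
    request = (random.choice([1, 2, 4]), random.choice([2, 4, 8]))
    action = max(range(len(obs)), key=lambda i: obs[i][0])  # greedy: most free CPU
    obs, reward, done = env.step(action, request)
```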
Error-bounded lossy compression is becoming an indispensable technique for the success of today's scientific projects, which produce vast volumes of data during simulation or instrument data acquisition. It not only significantly reduces data size, but also controls the compression error based on user-specified error bounds. Autoencoder (AE) models have been widely used in image compression, but few AE-based compression approaches support the error-bounding feature required by scientific applications. To address this issue, we explore using convolutional autoencoders to improve error-bounded lossy compression for scientific data, with the following three key contributions. (1) We provide an in-depth investigation of the characteristics of various autoencoder models and develop an error-bounded autoencoder-based framework on top of the SZ model. (2) We optimize the compression quality for the main stages in our designed AE-based error-bounded compression framework, fine-tuning the block sizes and latent sizes and optimizing the compression efficiency of the latent vectors. (3) We evaluate our proposed solution using five real-world scientific datasets and compare it with six related works. Experiments show that our solution exhibits very competitive compression quality among all the compressors in our tests. In absolute terms, it achieves much better compression quality than SZ2.1 and ZFP at high compression ratios (100%~800% improvement in compression ratio under the same data distortion).
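A conceptual sketch of how an autoencoder's lossy reconstruction can be made error-bounded in an SZ-style pipeline: use the AE output as a prediction, then quantize the residuals so every corrected value stays within the user's bound. This is a heavy simplification (1D NumPy data, no entropy coding of latents or codes), with `encode`/`decode` as assumed callables, not the paper's framework.

```python
import numpy as np

def compress(data, encode, decode, error_bound):
    latent = encode(data)                 # lossy AE prediction path
    recon = decode(latent)
    # Linear-scaling quantization of residuals guarantees the error bound:
    # |original - reconstructed| <= error_bound for every element.
    codes = np.round((data - recon) / (2 * error_bound)).astype(np.int32)
    return latent, codes

def decompress(latent, codes, decode, error_bound):
    return decode(latent) + codes * (2 * error_bound)
```

With `r` the residual, the decompressed error is `r - 2e * round(r / 2e)`, whose magnitude never exceeds `e`; this is the same mechanism SZ-family compressors use to turn any predictor into an error-bounded one.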
A further understanding of cause and effect within observational data is critical across many domains, such as economics, health care, public policy, web mining, online advertising, and marketing campaigns. Although significant advances have been made to overcome the challenges in causal effect estimation with observational data, such as missing counterfactual outcomes and selection bias between treatment and control groups, the existing methods mainly focus on source-specific and stationary observational data. Such learning strategies assume that all observational data are already available during the training phase and come from only one source. This practical concern of accessibility is ubiquitous in various academic and industrial applications. Therefore, in the era of big data, we face new challenges in causal inference with observational data, i.e., the extensibility for incrementally available observational data, the adaptability for the extra domain adaptation problem beyond the imbalance between treatment and control groups, and the accessibility for an enormous amount of data. In this position paper, we formally define the problem of continual treatment effect estimation, describe its research challenges, and then present possible solutions to this problem. Moreover, we discuss future research directions on this topic.