Applications of robot manipulation of cloth include fabric manufacturing and handling of blankets and laundry. Cloth manipulation is challenging for robots largely due to cloth's high degrees of freedom, complex dynamics, and severe self-occlusion when in folded or crumpled configurations. Prior work on robotic cloth manipulation relies primarily on vision sensors, which can be challenging for fine-grained manipulation tasks such as grasping a desired number of cloth layers from a stack. In this paper, we propose to use tactile sensing for cloth manipulation: we attach a tactile sensor (ReSkin) to one of the two fingertips of a Franka robot and train a classifier to determine whether the robot is grasping a specific number of cloth layers. In test-time experiments, the robot uses this classifier as part of its policy to grasp one or two cloth layers, using tactile feedback to determine suitable grasps. Experimental results over 180 physical trials suggest that the proposed method outperforms baselines that do not use tactile feedback, and generalizes better to unseen cloth than a method that uses an image classifier. Code, data, and videos are available at https://sites.google.com/view/reskin-cloth.
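As a toy illustration of classifying grasped layer count from tactile readings (not the paper's model; the sensor channels, data, and the nearest-centroid classifier below are all invented for this sketch), one could fit a per-class centroid over synthetic readings whose magnitude grows with the number of layers:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit one centroid per class label (0, 1, or 2 grasped layers)."""
    labels = sorted(set(y))
    return {c: X[np.array(y) == c].mean(axis=0) for c in labels}

def classify_layers(model, reading):
    """Return the layer count whose centroid is closest to the reading."""
    return min(model, key=lambda c: np.linalg.norm(reading - model[c]))

rng = np.random.default_rng(0)
# Hypothetical 5-channel tactile readings; signal magnitude grows with layers.
X = np.vstack([rng.normal(loc=k, scale=0.2, size=(20, 5)) for k in (0, 1, 2)])
y = [0] * 20 + [1] * 20 + [2] * 20
model = nearest_centroid_fit(X, y)
print(classify_layers(model, np.full(5, 1.05)))  # reading near the 1-layer centroid
```

A policy could then query `classify_layers` after a tentative pinch and re-grasp until the desired count is reported.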
Recent work has shown that 2-arm "fling" motions can be effective for garment smoothing. We consider single-arm fling motions. Unlike 2-arm fling motions, which require little robot trajectory parameter tuning, single-arm fling motions are sensitive to trajectory parameters. We consider a single 6-DOF robot arm that learns fling trajectories to achieve high garment coverage. Given a garment grasp point, the robot explores different parameterized fling trajectories in physical experiments. To improve learning efficiency, we propose a coarse-to-fine learning method that first uses a multi-armed bandit (MAB) framework to efficiently find a candidate action, which it then refines via a continuous optimization method. Further, we propose novel training-time and execution-time stopping criteria based on the uncertainty of fling outcomes. Compared to baselines, we show that the proposed method significantly accelerates learning. Moreover, with prior experience on similar garments collected through self-supervision, MAB learning time for a new garment is reduced by up to 87%. We evaluate 6 garment types: towels, T-shirts, long-sleeved shirts, dresses, sweatshirts, and jeans. Results suggest that, using prior experience, a robot requires under 30 minutes to learn a fling action for a novel garment that achieves 60-94% coverage.
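The coarse stage of such a coarse-to-fine scheme can be sketched with a standard UCB1 bandit over a discrete set of candidate trajectory parameters (this is a generic MAB sketch, not the paper's implementation; the three coverage distributions are invented):

```python
import math, random

def ucb1(reward_fns, budget, c=2.0):
    """UCB1 over a discrete set of candidate fling parameters.
    reward_fns[i]() returns a stochastic coverage reward for arm i."""
    n = [0] * len(reward_fns)
    s = [0.0] * len(reward_fns)
    for t in range(1, budget + 1):
        if t <= len(reward_fns):
            arm = t - 1                      # play every arm once first
        else:
            arm = max(range(len(reward_fns)),
                      key=lambda i: s[i] / n[i] + math.sqrt(c * math.log(t) / n[i]))
        r = reward_fns[arm]()
        n[arm] += 1
        s[arm] += r
    return max(range(len(reward_fns)), key=lambda i: s[i] / n[i])

random.seed(0)
# Hypothetical coverage distributions for three candidate trajectories.
arms = [lambda: random.gauss(0.55, 0.05),
        lambda: random.gauss(0.80, 0.05),   # best candidate
        lambda: random.gauss(0.60, 0.05)]
best = ucb1(arms, budget=60)
print(best)
```

The fine stage would then run a continuous optimizer seeded at the winning arm's parameters.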
Manipulating deformable objects using a single parameterized dynamic action can be useful for tasks such as fly fishing, spreading a wide blanket, and playing shuffleboard. Such tasks take as input a desired final state and output one parameterized open-loop dynamic robot action that produces a trajectory toward the final state. This is especially challenging for long-horizon trajectories with complex dynamics involving friction. This paper explores the task of Planar Robot Casting (PRC), in which one planar motion of a robot wrist holding one end of a cable causes the other end to slide across the plane toward a desired target. PRC allows the cable to reach points beyond the robot workspace and has applications for cable management in homes, warehouses, and factories. To efficiently learn a PRC policy for a given cable, we propose Real2Sim2Real, a self-supervised framework that automatically collects physical trajectory examples to tune the parameters of a dynamics simulator using Differential Evolution, generates many simulated examples, and then learns a policy using a weighted combination of simulated and physical data. We evaluate the framework with three simulators (Isaac Gym-segmented, Isaac Gym-hybrid, and PyBullet), two function approximators (Gaussian Processes and Neural Networks (NNs)), and three cables with differing stiffness, torsion, and friction. Results on 16 held-out test targets for each cable suggest that the NN PRC policies using Isaac Gym-segmented attain a median error distance (as a percentage of cable length) ranging from 8% to 14%, outperforming baselines and policies trained on only real or only simulated examples. Code, data, and videos are available at https://tinyurl.com/robotcast.
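The simulator-tuning step can be illustrated with a minimal Differential Evolution loop fitting one friction parameter of a toy sliding model to a "physical" observation (everything here, including the one-line simulator and the DE hyperparameters, is invented for the sketch; the real framework tunes a full cable simulator):

```python
import random

def sim_slide(mu, v0=2.0, g=9.81):
    """Toy dynamics simulator: distance a cable tip slides given friction mu."""
    return v0 ** 2 / (2 * mu * g)

def tune_friction(observed, pop_size=12, gens=40, bounds=(0.05, 1.0), seed=0):
    """Minimal Differential Evolution (rand/1) fitting mu to one observation."""
    rng = random.Random(seed)
    pop = [rng.uniform(*bounds) for _ in range(pop_size)]
    cost = lambda mu: (sim_slide(mu) - observed) ** 2
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = min(max(a + 0.8 * (b - c), bounds[0]), bounds[1])
            if cost(trial) < cost(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

true_mu = 0.3
mu_hat = tune_friction(observed=sim_slide(true_mu))
print(round(mu_hat, 3))  # close to 0.3
```

Once tuned, the simulator can cheaply generate the many trajectory examples used to train the policy.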
The performance of inertial navigation systems is largely dependent on a stable flow of external measurements and information to guarantee continuous filter updates and bound the inertial solution drift. Platforms in different operational environments may be prevented at some point from receiving external measurements, thus exposing their navigation solution to drift. Over the years, a wide variety of works have been proposed to overcome this shortcoming by exploiting knowledge of the system's current conditions and turning it into an applicable source of information to update the navigation filter. This paper aims to provide an extensive survey of information-aided navigation, broadly classified into direct, indirect, and model aiding. Each approach is described by the notable works that implemented its concept, use cases, relevant state updates, and their corresponding measurement models. By matching the appropriate constraint to a given scenario, one can improve the navigation solution accuracy, compensate for the lost information, and uncover certain internal states that would otherwise remain unobservable.
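One classic example of turning knowledge of the platform's condition into a filter update is the zero-velocity update: when the platform is known to be stationary, "velocity is zero" becomes a pseudo-measurement in a standard Kalman update. The two-state model and numbers below are invented for illustration:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State: [position, velocity]. A zero-velocity pseudo-measurement is
# applied when the platform is known to be stationary.
x = np.array([5.0, 0.4])          # drifted inertial estimate
P = np.diag([2.0, 0.5])
H = np.array([[0.0, 1.0]])        # observe velocity only
z = np.array([0.0])               # "we are not moving"
R = np.array([[1e-4]])
x, P = kalman_update(x, P, z, H, R)
print(x[1])  # velocity pulled to ~0, its covariance collapses
```

The same update structure carries over to the other aiding sources surveyed; only `H`, `z`, and `R` change.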
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure, meaning that certain parts of the state space move "fast" (and in a sense, are more influential) while other parts transition more "slowly." Such structure is common in real-world problems where sequential decisions need to be made at high frequencies, yet information that varies at a slower timescale also influences the optimal policy. Examples include: (1) service allocation for a multi-class queue with (slowly varying) stochastic costs, (2) a restless multi-armed bandit with an environmental state, and (3) energy demand response, where both day-ahead and real-time prices play a role in the firm's revenue. Models that fully capture these problems often result in MDPs with large state spaces and large effective time horizons (due to frequent decisions), rendering them computationally intractable. We propose an approximate dynamic programming algorithmic framework based on the idea of "freezing" the slow states, solving a set of simpler finite-horizon MDPs (the lower-level MDPs), and applying value iteration (VI) to an auxiliary MDP that transitions on a slower timescale (the upper-level MDP). We also extend the technique to a function approximation setting, where a feature-based linear architecture is used. On the theoretical side, we analyze the regret incurred by each variant of our frozen-state approach. Finally, we give empirical evidence that the frozen-state approach generates effective policies using just a fraction of the computational cost, while illustrating that simply omitting slow states from the decision modeling is often not a viable heuristic.
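The "freezing" idea can be sketched on a toy two-state queue: hold the slow state (here, a cost multiplier) fixed, and solve the resulting simpler finite-horizon MDP by backward value iteration. The transition matrices and costs are invented for the sketch and are not from the paper:

```python
import numpy as np

def finite_horizon_vi(P, R, horizon):
    """Backward value iteration for a finite-horizon MDP.
    P[a] is an (S, S) transition matrix, R[a] an (S,) reward vector."""
    V = np.zeros(P[0].shape[0])
    for _ in range(horizon):
        V = np.max([R[a] + P[a] @ V for a in range(len(P))], axis=0)
    return V

# Fast state: 2 queue levels. The slow state (a holding-cost multiplier)
# is frozen at one value while the lower-level MDP is solved.
def lower_level_values(slow_cost, horizon=20):
    P = [np.array([[0.9, 0.1], [0.5, 0.5]]),   # action 0: idle
         np.array([[1.0, 0.0], [0.9, 0.1]])]   # action 1: serve
    R = [np.array([0.0, -slow_cost]),          # holding cost scales with slow state
         np.array([-0.1, -0.1 - slow_cost])]   # serving has a fixed cost
    return finite_horizon_vi(P, R, horizon)

# Solve one simpler frozen-slow-state MDP per slow-state value; the upper-level
# MDP would then stitch these together on the slower timescale.
for c in (0.5, 2.0):
    print(lower_level_values(c))
```

The full framework additionally runs value iteration on the upper-level MDP over these frozen-state solutions; that part is omitted here.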
In the present work we propose an unsupervised ensemble method consisting of oblique trees that can address the task of auto-encoding, namely Oblique Forest AutoEncoders (briefly, OF-AE). Our method is a natural extension of the eForest encoder introduced in [1]. More precisely, by employing oblique splits consisting of multivariate linear combinations of features instead of axis-parallel ones, we devise an auto-encoder method through the computation of a sparse solution of a set of linear inequalities consisting of feature-value constraints. The code for reproducing our results is available at https://github.com/CDAlecsa/Oblique-Forest-AutoEncoders.
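A 2-D cartoon of the encode/decode idea: encoding records the pattern of oblique-split decisions, and decoding finds a point satisfying the resulting linear inequalities. Here decoding uses alternating projection onto half-spaces rather than the paper's sparse-solution computation, and the four hyperplanes are hand-picked for the sketch:

```python
import numpy as np

# Four hand-picked oblique hyperplanes in R^2 (weights not axis-parallel).
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
b = np.zeros(4)

def encode(x):
    """Leaf identity: the pattern of oblique-split decisions W @ x <= b."""
    return W @ x <= b

def decode(code, eps=0.05, iters=200):
    """Recover a point satisfying the split inequalities (with slack eps)
    by alternating projection onto half-spaces; a simple stand-in for the
    sparse-solution step of OF-AE."""
    x = np.zeros(W.shape[1])
    for _ in range(iters):
        for w, bi, side in zip(W, b, code):
            margin = w @ x - bi
            if side and margin > -eps:        # want w.x <= b - eps
                x -= ((margin + eps) / (w @ w)) * w
            elif not side and margin < eps:   # want w.x >= b + eps
                x += ((eps - margin) / (w @ w)) * w
    return x

x = np.array([0.4, -0.2])
x_hat = decode(encode(x))
print(np.array_equal(encode(x_hat), encode(x)))  # same leaf cell
```

The decoded point lands in the same cell of the oblique-split arrangement as the input, which is the sense in which tree ensembles can auto-encode.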
When robots learn reward functions using high-capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task "features" -- as well as how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use -- and thus their preferences and objectives -- we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
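A heavily simplified analogue of learning from similarity judgments: given (anchor, similar, dissimilar) triplets, learn per-feature weights so that the weighted distance respects the judgments, letting the causal feature dominate over the spurious one. The diagonal-metric model, update rule, and synthetic data below are all invented for this sketch and are not the authors' method:

```python
import numpy as np

def learn_metric(triplets, dim, lr=0.1, epochs=50):
    """Learn per-feature weights from (anchor, similar, dissimilar) triplets:
    push the weighted distance to the similar example below the dissimilar one."""
    w = np.ones(dim)
    for _ in range(epochs):
        for a, s, d in triplets:
            ds = w @ (a - s) ** 2           # distance to "similar" behavior
            dd = w @ (a - d) ** 2           # distance to "dissimilar" behavior
            if ds + 0.1 > dd:               # margin violated: adjust weights
                w -= lr * ((a - s) ** 2 - (a - d) ** 2)
                w = np.clip(w, 0.0, None)
    return w / w.sum()

rng = np.random.default_rng(0)
# Feature 0 is causal (users care about it); feature 1 is spurious noise.
triplets = []
for _ in range(30):
    a = rng.normal(size=2)
    s = a + np.array([0.05, 1.0]) * rng.normal(size=2)   # same causal feature
    d = a + np.array([1.0, 0.05]) * rng.normal(size=2)   # causal feature differs
    triplets.append((a, s, d))
w = learn_metric(triplets, dim=2)
print(w)  # weight on the causal feature dominates
```

The learned weights pick out which features the (simulated) user's similarity judgments actually depend on, mirroring how similarity queries isolate causal features from spurious ones.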
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Deep learning models are known to put the privacy of their training data at risk, which poses challenges for their safe and ethical release to the public. Differentially private stochastic gradient descent is the de facto standard for training neural networks without leaking sensitive information about the training data. However, applying it to models for graph-structured data poses a novel challenge: unlike with i.i.d. data, sensitive information about a node in a graph can leak not only through its own gradients, but also through the gradients of all nodes within a larger neighborhood. In practice, this limits privacy-preserving deep learning on graphs to very shallow graph neural networks. We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph. We develop three random-walk-based methods for generating such disjoint subgraphs and perform a careful analysis of the data-generating distributions to provide strong privacy guarantees. Through extensive experiments, we show that our method greatly outperforms the state-of-the-art baseline on three large graphs, and matches or outperforms it on four smaller ones.
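The subgraph-generation step can be sketched as a random walk restricted to not-yet-used nodes, so that the resulting subgraphs are disjoint by construction. This is one simple variant invented for illustration, not necessarily any of the paper's three methods:

```python
import random

def disjoint_random_walk_subgraphs(adj, walk_len, seed=0):
    """Partition nodes into disjoint subgraphs, each grown by a random walk
    restricted to nodes not yet assigned to any subgraph."""
    rng = random.Random(seed)
    unused = set(adj)
    subgraphs = []
    while unused:
        v = rng.choice(sorted(unused))       # start a new walk
        walk = [v]
        unused.remove(v)
        for _ in range(walk_len - 1):
            nbrs = [u for u in adj[walk[-1]] if u in unused]
            if not nbrs:                     # walk is stuck; close the subgraph
                break
            v = rng.choice(nbrs)
            walk.append(v)
            unused.remove(v)
        subgraphs.append(walk)
    return subgraphs

# Toy 6-node cycle graph.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
subs = disjoint_random_walk_subgraphs(adj, walk_len=3)
print(subs)
```

Because every node appears in exactly one subgraph, a gradient computed on one subgraph touches each node's data at most once, which is what enables the per-example sensitivity analysis DP-SGD needs.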
Machine learning models are typically evaluated by computing their similarity with reference annotations, and trained by maximizing that similarity. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
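As a minimal sketch of the kind of inter-rater reliability statistic involved (the paper's exact PGT estimator is not reproduced here), Cohen's kappa between two annotators' binary labels quantifies agreement beyond chance; low kappa signals that chasing ever-higher similarity to either annotator's labels stops paying off:

```python
def cohen_kappa(a, b):
    """Cohen's kappa between two annotators' binary labels."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n                   # positive-label rates
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)              # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical labels from two raters on 10 cases.
rater1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
rater2 = [1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
kappa = cohen_kappa(rater1, rater2)
print(round(kappa, 3))  # 0.583: moderate agreement
```

The same computation with one rater's repeated annotations gives intra-rater reliability; both feed into the proposed PGT approximation.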