We propose a method for optimal DER allocation in active power distribution networks that identifies vulnerable nodes, which we term critical nodes. Power variations at these critical nodes significantly affect the operation of the other connected nodes, making them the most suitable locations for DER placement. We demonstrate and evaluate our method on the standard IEEE-123 test feeder system. First, we partition the distribution system into an optimal network of microgrids using graph theory, and validate the partitioning with a graph neural network architecture to ensure that the microgrids are properly formed. Next, using an effective measurable analysis, namely Granger causality, we identify the critical nodes in the partitioned microgrids; placing DERs at these nodes improves network reliability and resilience. Furthermore, to validate system performance and energy resilience, we compute the percolation threshold of the microgrid network, which indicates the system's resilience after DERs are incorporated at the critical nodes. This proposed first-of-its-kind approach delivers, through data-driven analysis of the distribution network, effective microgrid partitioning, critical-node identification, optimal DER allocation, and system resilience assessment.
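A minimal sketch of what the Granger-causality screening step might look like: each node's power time series is tested as a cause of every other node's series, and nodes that Granger-cause the most neighbours are ranked as critical. The lag order, significance level, and function name are illustrative assumptions, not values from the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def critical_nodes(power, maxlag=4, alpha=0.05, top_k=3):
    """power: (T, N) array, column i = power injection series of node i."""
    T, N = power.shape
    out_degree = np.zeros(N, dtype=int)
    for i in range(N):          # candidate cause
        for j in range(N):      # candidate effect
            if i == j:
                continue
            # Column order is [effect, cause]: the test asks whether
            # the second column helps predict the first.
            res = grangercausalitytests(power[:, [j, i]], maxlag, verbose=False)
            pvals = [res[lag][0]["ssr_ftest"][1] for lag in res]
            if min(pvals) < alpha:
                out_degree[i] += 1
    # Nodes whose fluctuations influence the most other nodes are "critical".
    return np.argsort(out_degree)[::-1][:top_k]
```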
The options framework in hierarchical reinforcement learning decomposes an overall goal into a combination of options, i.e., simpler tasks, and their associated policies, enabling abstraction in the action space. Ideally, these options can be reused across different high-level goals; indeed, such reuse is necessary to realize the vision of a continual learning agent that can effectively leverage its past experience. Prior approaches have only proposed limited forms of transfer of pretrained options to new task settings. We propose a novel option indexing approach to hierarchical learning (OI-HRL), in which we learn an affinity function between options and the items present in the environment. This allows us to effectively reuse a large library of pretrained options in zero-shot generalization at test time, by restricting goal-directed learning to only those options relevant to the task at hand. We develop a meta-training loop that learns the representations of options and environments over a series of HRL problems, by incorporating feedback about the relevance of retrieved options to the high-level goal. We evaluate OI-HRL in two simulated settings, the CraftWorld and AI2THOR environments, and show that we achieve performance competitive with an oracular baseline, and substantial gains over a baseline that has the entire option library available for learning the hierarchical policy.
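An illustrative sketch of the option-indexing idea: options and environment items get learned embeddings, a dot-product affinity scores each option against the items present, and only the top-scoring options are exposed to the high-level policy at test time. All module names and dimensions below are assumptions.

```python
import torch
import torch.nn as nn

class OptionIndex(nn.Module):
    def __init__(self, n_options, n_items, dim=64):
        super().__init__()
        self.option_emb = nn.Embedding(n_options, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def affinity(self, option_ids, item_ids):
        # Dot-product affinity between options and environment items.
        o = self.option_emb(option_ids)          # (O, d)
        i = self.item_emb(item_ids)              # (I, d)
        return o @ i.t()                         # (O, I)

    def retrieve(self, item_ids, k=10):
        # Score every option by its best affinity to any present item,
        # then keep the top-k: only these options are made available
        # to goal-directed learning at test time (zero-shot reuse).
        all_opts = torch.arange(self.option_emb.num_embeddings)
        scores = self.affinity(all_opts, item_ids).max(dim=1).values
        return scores.topk(k).indices
```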
Determining the reading order of text is fundamental to document understanding. The problem is easily solved in pages where the text is organized into a sequence of lines and vertical alignments that run the height of the page (yielding multiple columns that can be read from left to right). We present a situation, the directory page parsing problem, in which information is presented on the page in an irregular, visually organized, two-dimensional format. Directory pages are fairly common in financial prospectuses and carry information about organizations, their addresses, and their relationships, which is key to client onboarding. Interestingly, directory pages sometimes have a hierarchical structure, motivating the need to generalize reading order to reading trees. We present solutions to the problems of identifying directory pages and constructing reading trees, using (learnt) classifiers for text segments and a bottom-up (left, top-left, top) traversal of the segments. The solution is a key part of a production service that supports automatic extraction of organization, address, and relationship information from client onboarding documents.
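A toy sketch of reading-tree construction from text segments with bounding boxes: each segment is attached to the nearest segment that lies above it and starts at or to the left of it. This parent rule is an illustrative stand-in for the paper's learnt classifiers and traversal, not the actual method.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    text: str
    x: float            # left edge
    y: float            # top edge (page coordinates, y grows downward)
    children: list = field(default_factory=list)

def build_reading_tree(segments):
    root = Segment("<root>", x=-1.0, y=-1.0)
    # Visit segments in visual order: top-to-bottom, then left-to-right.
    for seg in sorted(segments, key=lambda s: (s.y, s.x)):
        candidates = [p for p in segments
                      if p.y < seg.y and p.x <= seg.x] or [root]
        parent = max(candidates, key=lambda p: (p.y, p.x))  # nearest above-left
        parent.children.append(seg)
    return root
```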
In multi-agent systems with a large number of agents, the contribution of each agent to the value of other agents is typically minimal (e.g., aggregation systems such as Uber and Deliveroo). In this paper, we consider such multi-agent systems in which each agent is self-interested and takes a sequence of decisions, and we represent them as a Stochastic Non-atomic Congestion Game (SNCG). We derive key properties of equilibrium solutions in the SNCG model with non-atomic and nearly non-atomic agents. Building on these equilibrium properties, we provide a novel Multi-Agent Reinforcement Learning (MARL) mechanism that minimizes variance across the values of agents in the same state. To demonstrate the utility of this new mechanism, we provide detailed results on a real-world taxi dataset and on a generic simulator for aggregation systems. We show that our approach reduces the variance in revenues earned by taxi drivers while still providing higher joint revenues than leading approaches.
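A sketch of the variance-reduction idea: alongside a standard TD objective, penalize the spread of value estimates across agents occupying the same aggregated state. The weighting `beta` and the grouping scheme are illustrative assumptions, not the paper's exact loss.

```python
import torch

def value_loss_with_variance(values, td_targets, state_ids, beta=0.1):
    """values, td_targets: (n_agents,) tensors; state_ids: (n_agents,) longs."""
    td_loss = torch.mean((values - td_targets.detach()) ** 2)
    var_penalty = values.new_tensor(0.0)
    for s in state_ids.unique():
        group = values[state_ids == s]       # agents sharing one state
        if group.numel() > 1:
            var_penalty = var_penalty + group.var()
    return td_loss + beta * var_penalty
```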
The primary goal of this work is to study the effectiveness of an unsupervised domain adaptation approach for various applications, such as binary classification and anomaly detection, in the context of Alzheimer's disease (AD) detection on the OASIS datasets. We also explore image reconstruction and image synthesis for analyzing and generating 3D structural MRI data to establish performance benchmarks for anomaly detection. We demonstrate that domain adaptation improves the performance of AD detection when implemented in both supervised and unsupervised settings. Additionally, the proposed methodology achieves state-of-the-art performance for binary classification on the OASIS-1 dataset.
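The abstract does not name the specific adaptation method, so the following gradient-reversal (DANN-style) head is purely an illustrative example of one common unsupervised domain adaptation recipe: features are pushed to fool a domain discriminator while still supporting the AD classifier.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        # Reverse (and scale) gradients flowing into the feature extractor.
        return -ctx.lam * grad, None

class DANNHead(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, 2)   # AD vs. control
        self.domain_disc = nn.Linear(feat_dim, 2)  # source vs. target domain

    def forward(self, feats, lam=1.0):
        y = self.classifier(feats)
        d = self.domain_disc(GradReverse.apply(feats, lam))
        return y, d
```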
Document summarization aims to create a precise and coherent summary of a text document. Many deep learning summarization models have been developed mainly for English, and they often require a large training corpus and efficient pre-trained language models and tools. Summarization models for low-resource Indian languages, however, are often limited by rich morphological variation and by syntactic and semantic differences. In this paper, we propose GAE-ISumm, an unsupervised Indic summarization model that extracts summaries from text documents. In particular, GAE-ISumm uses a Graph Autoencoder (GAE) to jointly learn text representations and a document summary. We also provide TELSUM, a manually annotated Telugu summarization dataset, to experiment with GAE-ISumm. Further, we experiment with the most widely used publicly available Indian-language summarization datasets to investigate the effectiveness of GAE-ISumm on other Indian languages. Our experiments with GAE-ISumm across seven languages yield the following observations: (i) it is competitive with or better than state-of-the-art results on all datasets, (ii) it establishes benchmark results on TELSUM, and (iii) including positional and cluster information in the model improves the quality of the summaries.
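A hedged sketch of the GAE-based extractive idea: encode a sentence-similarity graph with a graph autoencoder and select the sentences whose embeddings are closest to the document centroid. The centroid scoring rule is an assumption for illustration; the paper learns representations and the summary jointly.

```python
import torch
from torch_geometric.nn import GAE, GCNConv

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hid_dim=128, out_dim=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

def extract_summary(x, edge_index, k=3):
    """x: (n_sentences, in_dim) features; edge_index: similarity graph."""
    model = GAE(Encoder(x.size(1)))
    z = model.encode(x, edge_index)               # sentence embeddings
    loss = model.recon_loss(z, edge_index)        # one objective; training loop omitted
    centroid = z.mean(dim=0, keepdim=True)
    scores = torch.cosine_similarity(z, centroid) # salience proxy
    return scores.topk(k).indices                 # indices of summary sentences
```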
In this paper, we propose Adam-Hash: an adaptive and dynamic multi-resolution hashing data-structure for fast pairwise summation estimation. Given a data-set $X \subset \mathbb{R}^d$, a binary function $f:\mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}$, and a point $y \in \mathbb{R}^d$, the Pairwise Summation Estimate is defined as $\mathrm{PSE}_X(y) := \frac{1}{|X|} \sum_{x \in X} f(x,y)$. For any given data-set $X$, we need to design a data-structure such that, given any query point $y \in \mathbb{R}^d$, it approximately estimates $\mathrm{PSE}_X(y)$ in time sub-linear in $|X|$. Prior works on this problem have focused exclusively on the case where the data-set is static and the queries are independent. In this paper, we design a hashing-based PSE data-structure that works in the more practical \textit{dynamic} setting, in which insertions, deletions, and replacements of points are allowed. Moreover, our proposed Adam-Hash is also robust to adaptive PSE queries, where an adversary can choose the query $q_j \in \mathbb{R}^d$ depending on the outputs of the previous queries $q_1, q_2, \dots, q_{j-1}$.
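For concreteness, the exact estimate below costs $O(|X|)$ per query; Adam-Hash's goal is to answer the same query in sub-linear time while also supporting insertions, deletions, and replacements. The Gaussian kernel used for $f$ here is just one example choice.

```python
import numpy as np

def f(x, y, sigma=1.0):
    # Example binary function: a Gaussian kernel.
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

def pse_exact(X, y):
    """PSE_X(y) = (1/|X|) * sum over x in X of f(x, y) -- O(|X|) per query."""
    return sum(f(x, y) for x in X) / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))       # data-set X in R^8
y = rng.normal(size=8)               # query point
print(pse_exact(X, y))
```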
The emergence of large pretrained models has enabled language models to achieve superior performance on common NLP tasks, including language modeling and question answering, compared to previous static word representation methods. Augmenting these models with a retriever that retrieves related text and documents as supporting information has shown promise for solving NLP problems in a more interpretable way, given that the additional knowledge is injected explicitly rather than captured in the models' parameters. Despite this recent progress, our analysis of retriever-augmented language models shows that this class of language models still lacks reasoning over the retrieved documents. In this paper, we study the strengths and weaknesses of different retriever-augmented language models, such as REALM, kNN-LM, FiD, ATLAS, and Flan-T5, in reasoning over the selected documents on different tasks. In particular, we analyze the reasoning failures of each of these models and study how these failures are rooted in the retriever module as well as in the language model.
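As background for one of the analyzed models, kNN-LM augments a base LM by interpolating its next-token distribution with a nearest-neighbour distribution built from retrieved contexts. A minimal sketch of that interpolation follows; the interpolation weight and distance weighting are tunable choices, not values from this paper.

```python
import numpy as np

def knn_lm_probs(p_lm, neighbor_tokens, neighbor_dists, vocab_size, lam=0.25):
    """p_lm: (V,) base LM distribution; neighbours retrieved by context."""
    w = np.exp(-np.asarray(neighbor_dists))   # closer neighbours weigh more
    w /= w.sum()
    p_knn = np.zeros(vocab_size)
    for tok, wt in zip(neighbor_tokens, w):   # mass on each neighbour's token
        p_knn[tok] += wt
    return lam * p_knn + (1 - lam) * p_lm     # interpolated distribution
```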
This paper proposes a perception and path planning pipeline for autonomous racing on an unknown bounded course. The pipeline was initially created for the 2021 evGrandPrix autonomous division and was further improved for the 2022 event, both of which resulted in first-place finishes. Using a simple LiDAR-based perception pipeline that feeds into an occupancy-grid-based expansion algorithm, we determine a goal point to drive toward. This pipeline reliably and consistently completed laps around a cone-defined track, averaging 6.85 m/s over a distance of 434.2 meters for a total lap time of 63.4 seconds.
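A rough sketch of the perception-to-goal step under stated assumptions: rasterize LiDAR hits into an occupancy grid, flood-fill the free space from the vehicle cell, and take the farthest reachable free cell as the goal point. The grid resolution, size, and goal rule are illustrative, not the paper's exact expansion algorithm.

```python
from collections import deque
import numpy as np

def goal_from_lidar(points, res=0.2, size=100):
    grid = np.zeros((size, size), dtype=bool)           # True = occupied
    for x, y in points:                                  # vehicle at grid centre
        i, j = int(x / res) + size // 2, int(y / res) + size // 2
        if 0 <= i < size and 0 <= j < size:
            grid[i, j] = True
    start = (size // 2, size // 2)
    dist, goal = {start: 0}, start
    q = deque([start])
    while q:                                             # BFS over free cells
        c = q.popleft()
        if dist[c] > dist[goal]:
            goal = c                                     # farthest reachable cell
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (c[0] + di, c[1] + dj)
            if (0 <= n[0] < size and 0 <= n[1] < size
                    and not grid[n] and n not in dist):
                dist[n] = dist[c] + 1
                q.append(n)
    # Convert the goal cell back to metric coordinates.
    return ((goal[0] - size // 2) * res, (goal[1] - size // 2) * res)
```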
Language models have been shown to be very effective in predicting brain recordings of subjects experiencing complex language stimuli. For a deeper understanding of this alignment, it is important to understand how the human brain's detailed processing of linguistic information compares with that of language models. In NLP, linguistic probing tasks have revealed a hierarchy of information processing in neural language models that progresses from simple to complex with increasing depth. In neuroscience, on the other hand, the strongest alignment with high-level language brain regions has consistently been observed in the middle layers. These findings leave open the question of what linguistic information actually underlies the observed alignment between brains and language models. We investigate this question via a direct approach, in which we eliminate information related to specific linguistic properties from the language model representations and observe how this intervention affects the alignment with fMRI brain recordings obtained while participants listened to a story. We investigate a range of linguistic properties (surface, syntactic, and semantic) and find that eliminating each one results in a significant decrease in brain alignment across all layers of a language model. These findings provide direct evidence for the role of specific linguistic information in the alignment between brain and language models, and open new avenues for mapping the joint information processing in both systems.
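A hedged sketch of one standard way to eliminate a linguistic property from layer representations: regress the representations on the property features and keep only the residuals, removing everything the property linearly explains. The paper's exact removal procedure may differ; this is an assumption for illustration.

```python
import numpy as np

def remove_property(Z, P):
    """Z: (n, d) layer representations; P: (n, k) property features."""
    W, *_ = np.linalg.lstsq(P, Z, rcond=None)   # best linear map P -> Z
    return Z - P @ W                            # residual representations

# Brain alignment would then be re-estimated with the residualized Z,
# e.g. by ridge-regressing fMRI responses on Z before vs. after removal.
```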