When developing predictive models of engineering infrastructure, population-level analysis is proposed to address data sparsity. Using an interpretable hierarchical Bayesian approach and operational fleet data, domain expertise is naturally encoded (and appropriately shared) between different subgroups, representing (i) use-type, (ii) component, or (iii) operating condition. Specifically, domain expertise is exploited to constrain the model via assumptions (and prior distributions), allowing the methodology to automatically share information between similar assets and improving survival analysis of a truck fleet and power prediction for a wind farm. In each asset-management example, a set of correlated functions is learnt in a combined inference to form a population model. Parameter estimation improves when sub-fleets are allowed to share correlated information at different levels of the hierarchy. In turn, groups with incomplete data automatically borrow statistical strength from those that are data-rich. The correlations enable knowledge transfer via Bayesian transfer learning, and they can be inspected to inform which assets share information about which effects (i.e., parameters). The success of both case studies demonstrates broad applicability to practical infrastructure monitoring, since the approach naturally adapts interpretable fleet models to different in-situ examples.
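To make the partial-pooling idea concrete, the sketch below shows a generic hierarchical Bayesian regression in PyMC, in which sub-fleet parameters are drawn from shared population-level hyper-priors so that data-poor groups borrow strength from data-rich ones; the linear likelihood, group structure, and variable names are illustrative assumptions rather than the model used in the paper.

```python
# A minimal sketch of hierarchical (partially pooled) Bayesian regression,
# illustrating how sub-fleet parameters can borrow statistical strength
# from the population. The simple linear likelihood is an assumption.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_groups, n_per_group = 5, 20                       # e.g. sub-fleets of assets
group = np.repeat(np.arange(n_groups), n_per_group)
x = rng.uniform(0, 1, size=group.size)              # covariate (e.g. operating condition)
y = 2.0 + group * 0.1 + 1.5 * x + rng.normal(0, 0.2, size=group.size)

with pm.Model() as fleet_model:
    # Population-level (hyper-)priors encode domain expertise shared by all groups
    mu_a = pm.Normal("mu_a", 0.0, 5.0)
    sigma_a = pm.HalfNormal("sigma_a", 1.0)
    mu_b = pm.Normal("mu_b", 0.0, 5.0)
    sigma_b = pm.HalfNormal("sigma_b", 1.0)

    # Group-level parameters are partially pooled towards the population
    a = pm.Normal("a", mu_a, sigma_a, shape=n_groups)
    b = pm.Normal("b", mu_b, sigma_b, shape=n_groups)
    sigma = pm.HalfNormal("sigma", 1.0)

    # Likelihood: groups with little data are regularised by the shared hyper-priors
    pm.Normal("y", a[group] + b[group] * x, sigma, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```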
A power curve captures the relationship between wind speed and the output power of a specific wind turbine. Accurate regression models of this function prove useful for monitoring, maintenance, design, and planning. In practice, however, measurements do not always correspond to the ideal curve: power curtailments appear as (additional) functional components. Such multivalued relationships cannot be modelled by conventional regression, and the associated data are typically removed during pre-processing. The current work suggests an alternative method to infer multivalued relationships in curtailed power data. Using a population-based approach, an overlapping mixture of probabilistic regression models is applied to signals recorded from turbines within an operational wind farm. The model is shown to provide an accurate representation of practical power data across the population.
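As an illustration of the mixture-of-regressions idea, the following sketch fits a two-component overlapping mixture of probabilistic (linear-basis) regression models with EM, so that curtailed and nominal operating data can be assigned to separate functional components; the basis choice, component count, and initialisation are assumptions, not the paper's implementation.

```python
# A minimal sketch of a two-component mixture of probabilistic regression
# models fitted with EM, separating a nominal power curve from a curtailed
# component. Basis functions and hyperparameters are illustrative.
import numpy as np

def basis(x):
    # simple polynomial basis for the power curve; other bases could be used
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

def fit_mixture_regression(x, y, k=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    Phi = basis(x)
    n, d = Phi.shape
    W = rng.normal(size=(k, d))                  # regression weights per component
    var = np.full(k, y.var())                    # noise variance per component
    pi = np.full(k, 1.0 / k)                     # mixing proportions
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        resid = y[:, None] - Phi @ W.T
        log_p = -0.5 * (resid**2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component
        for j in range(k):
            w_j = r[:, j]
            A = Phi.T @ (w_j[:, None] * Phi) + 1e-6 * np.eye(d)
            W[j] = np.linalg.solve(A, Phi.T @ (w_j * y))
            var[j] = (w_j * (y - Phi @ W[j])**2).sum() / w_j.sum()
            pi[j] = w_j.mean()
    return W, var, pi, r
```

The returned responsibilities indicate, for each measurement, how strongly it is explained by the nominal versus the curtailed component, so curtailed data need not be discarded during pre-processing.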
A primary motivation for the development and implementation of structural health monitoring (SHM) systems is the prospect of gaining the ability to make informed decisions regarding the operation and maintenance of structures. Unfortunately, descriptive labels for measured data corresponding to the health states of the monitored structure are seldom available prior to the deployment of a monitoring system. This issue limits the applicability of traditional supervised and unsupervised machine-learning approaches to the development of statistical classifiers for decision-supporting SHM systems. The current paper presents a risk-based formulation of active learning, in which queries for class-label information are guided by the expected value of that information for each incipient data point. When applied to structural health monitoring, class-label queries can be mapped onto inspections of the structure of interest to determine its health state. In this paper, the risk-based active-learning process is explained and visualised via a representative numerical example and subsequently applied to the Z24 Bridge benchmark. The case-study results indicate that the performance of decision-making agents can be improved via risk-based active learning of statistical classifiers, by accounting for the decision process itself.
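A minimal sketch of a risk-based query criterion is given below: for each incipient data point, the expected value of perfect information of its class label is computed from the classifier's predictive probabilities and a utility matrix over actions and health states, and the point with the greatest value is queried (i.e., the structure is inspected). The utilities and class probabilities are illustrative assumptions, not the paper's formulation.

```python
# A minimal sketch of expected-value-of-perfect-information (EVPI) scoring
# for label queries, given class probabilities from a statistical classifier
# and a utility matrix over (action, health state).
import numpy as np

def expected_value_of_information(p, utility):
    """p: (n, k) class probabilities; utility: (n_actions, k) utilities."""
    # Expected utility of the best action chosen *before* observing the label
    prior_best = (utility @ p.T).max(axis=0)                 # (n,)
    # Expected utility if the true class were revealed before acting
    posterior_best = (p * utility.max(axis=0)).sum(axis=1)   # (n,)
    return posterior_best - prior_best                       # EVPI >= 0

# Example: two health states (normal, damaged), two actions (do nothing, intervene)
utility = np.array([[0.0, -50.0],    # do nothing: costly if actually damaged
                    [-5.0, -10.0]])  # intervene: small cost, limits damage
p = np.array([[0.95, 0.05],
              [0.60, 0.40],
              [0.10, 0.90]])
evpi = expected_value_of_information(p, utility)
query_index = int(np.argmax(evpi))   # inspect the structure for this data point
```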
While the brain connectivity network can inform the understanding and diagnosis of developmental dyslexia, its cause-effect relationships have not yet been sufficiently examined. Employing electroencephalography signals and band-limited white noise stimulus at 4.8 Hz (prosodic-syllabic frequency), we measure the phase Granger causalities among channels to identify differences between dyslexic learners and controls, thereby proposing a method to calculate directional connectivity. As causal relationships run in both directions, we explore three scenarios, namely channels' activity as sources, as sinks, and in total. Our proposed method can be used for both classification and exploratory analysis. In all scenarios, we find confirmation of the established right-lateralized Theta sampling network anomaly, in line with the temporal sampling framework's assumption of oscillatory differences in the Theta and Gamma bands. Further, we show that this anomaly primarily occurs in the causal relationships of channels acting as sinks, where it is significantly more pronounced than when only total activity is observed. In the sink scenario, our classifier obtains 0.84 and 0.88 accuracy and 0.87 and 0.93 AUC for the Theta and Gamma bands, respectively.
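The sketch below illustrates how a directional connectivity matrix can be aggregated into per-channel source, sink, and total activity. It uses ordinary time-domain Granger causality from statsmodels as a simple stand-in; the study itself uses phase Granger causality on band-limited EEG, so this should be read only as an illustration of the aggregation step.

```python
# A minimal sketch: pairwise Granger-causality scores assembled into a
# directional connectivity matrix, then summed over source and sink roles.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def causality_matrix(signals, maxlag=5):
    """signals: (n_samples, n_channels). C[i, j] = strength of j -> i."""
    n_ch = signals.shape[1]
    C = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(n_ch):
            if i == j:
                continue
            res = grangercausalitytests(signals[:, [i, j]], maxlag=maxlag, verbose=False)
            # use -log10(p) of the F-test at the chosen lag as a causality score
            p_value = res[maxlag][0]["ssr_ftest"][1]
            C[i, j] = -np.log10(max(p_value, 1e-12))
    return C

signals = np.random.default_rng(0).normal(size=(1000, 4))   # placeholder for EEG channels
C = causality_matrix(signals)
source_activity = C.sum(axis=0)   # column sums: channel as a source (outgoing)
sink_activity = C.sum(axis=1)     # row sums: channel as a sink (incoming)
total_activity = source_activity + sink_activity
```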
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
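As a rough illustration of one of the downstream tasks, the sketch below trains a standard architecture to predict the brain region of a 2D image patch; data loading is left as a placeholder (the released data and baseline code are at https://mtneuro.github.io/), and the patch size, class count, and training details are assumptions rather than the benchmark's configuration.

```python
# A minimal sketch of a supervised brain-region classification readout on
# grayscale microtomography patches, using a standard architecture.
import torch
import torch.nn as nn
from torchvision.models import resnet18

n_regions = 4                                   # assumed number of region classes
model = resnet18(weights=None, num_classes=n_regions)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel input

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(patches, labels):
    """patches: (B, 1, H, W) float tensor; labels: (B,) region indices."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(patches), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random placeholder data standing in for X-ray patches
loss = train_step(torch.randn(8, 1, 128, 128), torch.randint(0, n_regions, (8,)))
```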
We present in this paper a family of generalized simultaneous perturbation stochastic approximation (G-SPSA) estimators that estimate the gradient of the objective using noisy function measurements, but where the number of function measurements and the form of the gradient estimator is guided by the desired estimator bias. In particular, estimators with more function measurements are seen to result in lower bias. We provide an analysis of convergence of the generalized SPSA algorithm, and point to possible future directions.
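For reference, the sketch below implements the standard two-measurement SPSA gradient estimator, the base case of the generalized family described above, together with a simple stochastic-approximation loop; the gain sequences are commonly used defaults and are not tied to the paper's bias analysis.

```python
# A minimal sketch of two-measurement SPSA: the gradient of a noisy objective
# is estimated from two function evaluations along a random simultaneous
# perturbation with Rademacher components.
import numpy as np

def spsa_gradient(f, x, c=0.1, rng=None):
    """Estimate grad f(x) from two noisy measurements of f."""
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=x.shape)      # Rademacher perturbation
    y_plus = f(x + c * delta)
    y_minus = f(x - c * delta)
    return (y_plus - y_minus) / (2.0 * c * delta)      # element-wise, since delta_i = +/-1

def spsa_minimize(f, x0, n_iter=500, a=0.05, c=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        ak = a / (k + 1) ** 0.602                      # common SPSA gain schedule
        ck = c / (k + 1) ** 0.101
        x = x - ak * spsa_gradient(f, x, ck, rng)
    return x

# Example: noisy quadratic objective
noisy_quadratic = lambda x: np.sum((x - 1.0) ** 2) + np.random.normal(0, 0.01)
x_opt = spsa_minimize(noisy_quadratic, x0=np.zeros(5))
```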
The intersection of ground reaction forces in a small, point-like area above the center of mass has been observed in computer simulation models and human walking experiments. This intersection point is often called a virtual pivot point (VPP). With the VPP observed so ubiquitously, it is commonly assumed to provide postural stability for bipedal walking. In this study, we challenge this assumption by questioning if walking without a VPP is possible. Deriving gaits with a neuromuscular reflex model through multi-stage optimization, we found stable walking patterns that show no signs of the VPP-typical intersection of ground reaction forces. We, therefore, conclude that a VPP is not necessary for upright, stable walking. The non-VPP gaits found are stable and successfully rejected step-down perturbations, which indicates that a VPP is not primarily responsible for locomotion robustness or postural stability. However, a collision-based analysis indicates that non-VPP gaits increased the potential for collisions between the vectors of the center of mass velocity and ground reaction forces during walking, suggesting an increased mechanical cost of transport. Although our computer simulation results have yet to be confirmed through experimental studies, they already strongly challenge the existing explanation of the VPP's function and provide an alternative explanation.
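To make the notion of a VPP computable, the sketch below estimates it as the least-squares intersection of the ground-reaction-force lines of action in a trunk-fixed sagittal plane, and reports the mean perpendicular distance of the lines from that point as a simple measure of how well-defined the intersection is (large values correspond to non-VPP gaits). The 2D conventions and frame choices are illustrative assumptions, not the study's pipeline.

```python
# A minimal sketch of estimating a virtual pivot point (VPP) as the
# least-squares intersection of ground-reaction-force lines of action.
import numpy as np

def virtual_pivot_point(cop, grf):
    """cop: (n, 2) points on each force line (centre of pressure, body frame).
    grf: (n, 2) ground-reaction-force vectors. Returns the point minimising
    the sum of squared distances to all force lines."""
    d = grf / np.linalg.norm(grf, axis=1, keepdims=True)    # unit line directions
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for a_i, d_i in zip(cop, d):
        P = np.eye(2) - np.outer(d_i, d_i)   # projector onto the line's normal space
        A += P
        b += P @ a_i
    return np.linalg.solve(A, b)

def vpp_spread(cop, grf, vpp):
    """Mean perpendicular distance of the force lines from the candidate VPP;
    large values indicate no well-defined intersection (a non-VPP gait)."""
    d = grf / np.linalg.norm(grf, axis=1, keepdims=True)
    r = vpp - cop
    return np.mean(np.abs(r[:, 0] * d[:, 1] - r[:, 1] * d[:, 0]))
```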
The proliferation of automatic faithfulness metrics for summarization has produced a need for benchmarks to evaluate them. While existing benchmarks measure the correlation with human judgements of faithfulness on model-generated summaries, they are insufficient for diagnosing whether metrics are: 1) consistent, i.e., decrease as errors are introduced into a summary, 2) effective on human-written texts, and 3) sensitive to different error types (as summaries can contain multiple errors). To address these needs, we present a benchmark of unfaithful minimal pairs (BUMP), a dataset of 889 human-written, minimally different summary pairs, where a single error (from an ontology of 7 types) is introduced to a summary from the CNN/DailyMail dataset to produce an unfaithful summary. We find BUMP complements existing benchmarks in a number of ways: 1) the summaries in BUMP are harder to discriminate and less probable under SOTA summarization models, 2) BUMP enables measuring the consistency of metrics, and reveals that the most discriminative metrics tend not to be the most consistent, 3) BUMP enables the measurement of metrics' performance on individual error types and highlights areas of weakness for future work.
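The kind of consistency measurement BUMP enables can be sketched as follows: a faithfulness metric is consistent on a minimal pair if it scores the unfaithful summary lower than its faithful counterpart, and consistency can be broken down by error type. The field names and metric interface below are assumptions for illustration, not the released benchmark API.

```python
# A minimal sketch of pairwise consistency evaluation over minimal pairs.
from typing import Callable, Iterable

def consistency(metric: Callable[[str, str], float],
                pairs: Iterable[dict]) -> float:
    """pairs: dicts with 'document', 'faithful_summary', 'unfaithful_summary',
    and 'error_type'. Returns the fraction of pairs where the score drops."""
    hits, total = 0, 0
    for pair in pairs:
        s_faithful = metric(pair["document"], pair["faithful_summary"])
        s_unfaithful = metric(pair["document"], pair["unfaithful_summary"])
        hits += int(s_unfaithful < s_faithful)
        total += 1
    return hits / max(total, 1)

def consistency_by_error_type(metric, pairs):
    """Per-error-type consistency, e.g. to expose weaknesses on specific errors."""
    by_type = {}
    for pair in pairs:
        by_type.setdefault(pair["error_type"], []).append(pair)
    return {t: consistency(metric, ps) for t, ps in by_type.items()}
```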
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
The lack of standardization is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations due to differences in hardware and acquisition parameters. In recent years, MR harmonization using image synthesis with disentanglement has been proposed to compensate for the undesired contrast variations. Despite the success of existing methods, we argue that three major improvements can be made. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images must be available), which limits their applicability. Third, existing methods are generally sensitive to imaging artifacts. In this paper, we present a novel approach, Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), to address these three issues. We first propose an anatomy fusion module that enables HACA3 to respect the anatomical differences between MR contrasts. HACA3 is also robust to imaging artifacts and can be trained and applied to any set of MR contrasts. Experiments show that HACA3 achieves state-of-the-art performance under multiple image quality metrics. We also demonstrate the applicability of HACA3 on downstream tasks with diverse MR datasets acquired from 21 sites with different field strengths, scanner platforms, and acquisition protocols.
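To illustrate the underlying disentanglement idea (not the HACA3 architecture itself, which adds attention-based anatomy fusion and artifact awareness), the sketch below recombines a source image's spatial anatomy representation with a target image's global contrast code through a decoder; all module shapes and layer choices are assumptions.

```python
# A minimal sketch of harmonization via anatomy/contrast disentanglement:
# an anatomy encoder, a contrast encoder, and a decoder that recombines them.
import torch
import torch.nn as nn

class AnatomyEncoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 8, 1),            # spatial anatomy representation
        )
    def forward(self, x):
        return self.net(x)

class ContrastEncoder(nn.Module):
    def __init__(self, ch=32, dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, dim),
        )
    def forward(self, x):
        return self.net(x)                  # a global (non-spatial) contrast code

class Decoder(nn.Module):
    def __init__(self, ch=32, dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(8 + dim, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 1),
        )
    def forward(self, anatomy, contrast):
        c = contrast[:, :, None, None].expand(-1, -1, *anatomy.shape[2:])
        return self.net(torch.cat([anatomy, c], dim=1))

anat_enc, cont_enc, dec = AnatomyEncoder(), ContrastEncoder(), Decoder()
source, target = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
harmonized = dec(anat_enc(source), cont_enc(target))   # source anatomy, target contrast
```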