The AASM guidelines are the result of decades of effort aiming at standardizing the sleep scoring procedure, with the final goal of having a commonly used methodology. The guidelines cover several aspects, from technical/digital specifications (e.g., recommended EEG derivations) to detailed sleep scoring rules according to age. In the context of automated sleep scoring, deep learning has demonstrated better performance compared to many other techniques. Usually, clinical expertise and official guidelines are considered fundamental to support automated sleep scoring algorithms in solving the task. In this paper, we show that deep learning-based sleep scoring algorithms may not need to fully exploit clinical knowledge or to strictly follow the AASM guidelines. Specifically, we demonstrate that U-Sleep, a state-of-the-art sleep scoring algorithm, can solve the scoring task even when using clinically non-recommended or non-conventional derivations, and without exploiting information about the chronological age of the subjects. We finally strengthen a well-known finding: using data from multiple data centers always results in better performance compared with training on a single cohort. Indeed, we show that the latter statement remains valid even when the size and heterogeneity of the single data cohort are increased. In all our experiments, we used 28528 polysomnography studies from 13 different clinical studies.
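To make the idea of a non-recommended derivation concrete, the sketch below forms an EEG derivation by subtracting two referential channels and cuts it into 30-second AASM scoring epochs. This is only an illustration of the setup, not the authors' pipeline; `scoring_model` and the sampling rate `FS` are hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch only (not the authors' code): build a non-recommended
# EEG derivation from two referential channels and split it into 30 s
# epochs for a generic pretrained sleep-scoring model.

FS = 128  # sampling rate in Hz (assumption)

def make_derivation(sig_a: np.ndarray, sig_b: np.ndarray) -> np.ndarray:
    """Derive e.g. F3-F4 from two referential channels."""
    return sig_a - sig_b

def to_epochs(signal: np.ndarray, fs: int = FS, epoch_sec: int = 30) -> np.ndarray:
    """Split a 1-D signal into consecutive 30-second scoring epochs."""
    n = len(signal) // (fs * epoch_sec)
    return signal[: n * fs * epoch_sec].reshape(n, fs * epoch_sec)

# hypnogram = scoring_model.predict(to_epochs(make_derivation(f3, f4)))
```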
Images acquired during underwater activities suffer from environmental properties of the water, such as turbidity and light attenuation. These phenomena cause color distortion, blurring, and contrast reduction. In addition, irregular ambient light distribution causes color channel unbalance and regions with high-intensity pixels. Recent works related to underwater image enhancement, and based on deep learning approaches, tackle the lack of paired datasets by generating synthetic ground truth. In this paper, we present a self-supervised learning methodology for underwater image enhancement based on deep learning that requires no paired datasets. The proposed method estimates the degradation present in underwater images. Besides, an autoencoder reconstructs this image, and its output image is degraded using the estimated degradation information. Therefore, the strategy replaces the output image with its degraded version in the loss function during the training phase. This procedure misleads the neural network, which then learns to compensate for the additional degradation. As a result, the reconstructed image is an enhanced version of the input image. Additionally, the algorithm presents an attention module to reduce high-intensity areas generated in the enhanced images by color channel unbalance and outlier regions. Furthermore, the proposed methodology requires no ground truth. Besides, only real underwater images were used to train the neural network, and the results indicate the effectiveness of the method in terms of color preservation, color cast reduction, and contrast improvement.
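A minimal sketch of the training step described above is given below, under the assumption of the standard underwater image formation model (observed = clean * transmission + ambient_light * (1 - transmission)); `autoencoder` and `estimate_degradation` are hypothetical placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

def training_step(autoencoder: nn.Module, estimate_degradation,
                  underwater_img: torch.Tensor,
                  loss_fn=nn.L1Loss()) -> torch.Tensor:
    # 1) estimate the degradation present in the raw underwater image
    t, A = estimate_degradation(underwater_img)   # transmission map, ambient light (placeholders)
    # 2) reconstruct (ideally enhance) the image with the autoencoder
    enhanced = autoencoder(underwater_img)
    # 3) re-degrade the output using the estimated degradation
    re_degraded = enhanced * t + A * (1.0 - t)    # assumed formation model
    # 4) the loss compares the re-degraded output with the original input,
    #    so minimizing it pushes `enhanced` to compensate the degradation
    return loss_fn(re_degraded, underwater_img)
```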
While the brain connectivity network can inform the understanding and diagnosis of developmental dyslexia, its cause-effect relationships have not yet been sufficiently examined. Employing electroencephalography signals and band-limited white noise stimulus at 4.8 Hz (prosodic-syllabic frequency), we measure the phase Granger causalities among channels to identify differences between dyslexic learners and controls, thereby proposing a method to calculate directional connectivity. As causal relationships run in both directions, we explore three scenarios, namely channels' activity as sources, as sinks, and in total. Our proposed method can be used for both classification and exploratory analysis. In all scenarios, we find confirmation of the established right-lateralized Theta sampling network anomaly, in line with the temporal sampling framework's assumption of oscillatory differences in the Theta and Gamma bands. Further, we show that this anomaly primarily occurs in the causal relationships of channels acting as sinks, where it is significantly more pronounced than when only total activity is observed. In the sink scenario, our classifier obtains 0.84 and 0.88 accuracy and 0.87 and 0.93 AUC for the Theta and Gamma bands, respectively.
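As an illustration of the source/sink/total readouts, the sketch below computes a plain (non-phase) Granger-causality matrix among channels with statsmodels and summarizes it per channel; the band-limiting and phase extraction of the actual method are omitted, and `eeg` is assumed to be an array of shape (n_samples, n_channels).

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def causality_matrix(eeg: np.ndarray, maxlag: int = 5) -> np.ndarray:
    """C[i, j] = strength (F statistic) of channel j Granger-causing channel i."""
    n_ch = eeg.shape[1]
    C = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(n_ch):
            if i == j:
                continue
            # tests whether the second column (j) causes the first column (i)
            res = grangercausalitytests(eeg[:, [i, j]], maxlag=maxlag, verbose=False)
            C[i, j] = res[maxlag][0]["ssr_ftest"][0]
    return C

# Per-channel summaries corresponding to the three scenarios:
#   C.sum(axis=1) -> channel as sink   (incoming causality)
#   C.sum(axis=0) -> channel as source (outgoing causality)
#   their sum     -> total activity
```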
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
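The sketch below shows one generic way such a benchmark can be read out, namely a linear probe on frozen self-supervised features for the brain-region classification task; `encode` is a hypothetical placeholder and this is not the MTNeuro reference code (see https://mtneuro.github.io/ for that).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def linear_probe(encode, train_imgs, train_labels, test_imgs, test_labels) -> float:
    """Accuracy of a linear classifier trained on frozen image embeddings."""
    z_train = np.stack([encode(x) for x in train_imgs])   # frozen self-supervised features
    z_test = np.stack([encode(x) for x in test_imgs])
    clf = LogisticRegression(max_iter=1000).fit(z_train, train_labels)
    return accuracy_score(test_labels, clf.predict(z_test))
```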
Explainability is a vibrant research topic in the artificial intelligence community, with growing interest across methods and domains. Much has been written about the topic, yet explainability still lacks shared terminology and a framework capable of providing structural soundness to explanations. In our work, we address these issues by proposing a novel definition of explanation that is a synthesis of what can be found in the literature. We recognize that explanations are not atomic but the product of evidence stemming from the model and its input-output and the human interpretation of this evidence. Furthermore, we fit explanations into the properties of faithfulness (i.e., the explanation being a true description of the model's decision-making) and plausibility (i.e., how much the explanation looks convincing to the user). Using our proposed theoretical framework simplifies how these properties are operationalized and provides new insight into common explanation methods that we analyze as case studies.
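As one concrete way faithfulness is often operationalized (our assumption for illustration, not a definition prescribed by the paper), the sketch below occludes the features that an explanation ranks as most important and measures how much the model's confidence in its original prediction drops.

```python
import numpy as np

def deletion_faithfulness(model, x: np.ndarray, importance: np.ndarray,
                          k: int = 10, baseline: float = 0.0) -> float:
    """Confidence drop after deleting the k features ranked most important.

    `model` is any classifier exposing predict_proba; `importance` holds
    per-feature scores produced by an explanation method (placeholders here).
    """
    p = model.predict_proba(x[None])[0]
    predicted_class = int(p.argmax())
    x_deleted = x.copy()
    x_deleted[np.argsort(importance)[::-1][:k]] = baseline  # occlude the top-k features
    p_deleted = model.predict_proba(x_deleted[None])[0]
    # a faithful explanation should make this drop large
    return float(p[predicted_class] - p_deleted[predicted_class])
```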
We present in this paper a family of generalized simultaneous perturbation stochastic approximation (G-SPSA) estimators that estimate the gradient of the objective using noisy function measurements, but where the number of function measurements and the form of the gradient estimator are guided by the desired estimator bias. In particular, estimators with more function measurements are seen to result in lower bias. We provide an analysis of convergence of the generalized SPSA algorithm, and point to possible future directions.
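For reference, the classic two-measurement SPSA estimate sketched below is the simplest member of this family; the generalized estimators add further function measurements to trade evaluations for lower bias.

```python
import numpy as np

def spsa_gradient(f, theta: np.ndarray, c: float = 0.01, rng=None) -> np.ndarray:
    """Two-measurement SPSA gradient estimate at theta (perturbation size c)."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher perturbations
    y_plus = f(theta + c * delta)                        # noisy function measurement
    y_minus = f(theta - c * delta)                       # noisy function measurement
    return (y_plus - y_minus) / (2.0 * c * delta)        # elementwise gradient estimate
```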
The intersection of ground reaction forces in a small, point-like area above the center of mass has been observed in computer simulation models and human walking experiments. This intersection point is often called a virtual pivot point (VPP). With the VPP observed so ubiquitously, it is commonly assumed to provide postural stability for bipedal walking. In this study, we challenge this assumption by questioning if walking without a VPP is possible. Deriving gaits with a neuromuscular reflex model through multi-stage optimization, we found stable walking patterns that show no signs of the VPP-typical intersection of ground reaction forces. We, therefore, conclude that a VPP is not necessary for upright, stable walking. The non-VPP gaits found are stable and successfully rejected step-down perturbations, which indicates that a VPP is not primarily responsible for locomotion robustness or postural stability. However, a collision-based analysis indicates that non-VPP gaits increased the potential for collisions between the vectors of the center of mass velocity and ground reaction forces during walking, suggesting an increased mechanical cost of transport. Although our computer simulation results have yet to be confirmed through experimental studies, they already strongly challenge the existing explanation of the VPP's function and provide an alternative explanation.
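For context, the VPP is typically identified as the point that best intersects the lines of action of the ground reaction forces over a step; a minimal least-squares version of that computation is sketched below (our assumption about the standard analysis, not the authors' code).

```python
import numpy as np

def virtual_pivot_point(cop: np.ndarray, grf: np.ndarray) -> np.ndarray:
    """Least-squares intersection of the GRF lines of action.

    cop: (n, 2) sagittal-plane points of force application,
    grf: (n, 2) corresponding ground reaction force vectors.
    """
    directions = grf / np.linalg.norm(grf, axis=1, keepdims=True)
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, u in zip(cop, directions):
        P = np.eye(2) - np.outer(u, u)   # projector orthogonal to the line direction
        A += P
        b += P @ p
    return np.linalg.solve(A, b)         # point closest to all force lines
```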
The proliferation of automatic faithfulness metrics for summarization has produced a need for benchmarks to evaluate them. While existing benchmarks measure the correlation with human judgements of faithfulness on model-generated summaries, they are insufficient for diagnosing whether metrics are: 1) consistent, i.e., decrease as errors are introduced into a summary, 2) effective on human-written texts, and 3) sensitive to different error types (as summaries can contain multiple errors). To address these needs, we present a benchmark of unfaithful minimal pairs (BUMP), a dataset of 889 human-written, minimally different summary pairs, where a single error (from an ontology of 7 types) is introduced to a summary from the CNN/DailyMail dataset to produce an unfaithful summary. We find BUMP complements existing benchmarks in a number of ways: 1) the summaries in BUMP are harder to discriminate and less probable under SOTA summarization models, 2) BUMP enables measuring the consistency of metrics, and reveals that the most discriminative metrics tend not to be the most consistent, 3) BUMP enables the measurement of metrics' performance on individual error types and highlights areas of weakness for future work.
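The consistency property can be checked directly on such minimal pairs; a minimal sketch of this readout (our illustration, not necessarily BUMP's exact protocol) is shown below.

```python
def consistency(metric, pairs) -> float:
    """Fraction of minimal pairs where the metric scores the faithful summary
    strictly higher than its minimally edited, unfaithful counterpart.

    `metric(document, summary)` is any faithfulness metric returning a score;
    `pairs` yields (document, faithful_summary, unfaithful_summary) triples.
    """
    pairs = list(pairs)
    hits = sum(metric(doc, good) > metric(doc, bad) for doc, good, bad in pairs)
    return hits / len(pairs)
```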
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
The lack of standardization is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations due to differences in hardware and acquisition parameters. In recent years, MR harmonization using image synthesis with disentanglement has been proposed to compensate for the undesired contrast variations. Despite the success of existing methods, we argue that three major improvements can be made. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images must be available), which limits their applicability. Third, existing methods are generally sensitive to imaging artifacts. In this paper, we present a novel approach, Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), to address these three issues. We first propose an anatomy fusion module that enables HACA3 to respect the anatomical differences between MR contrasts. HACA3 is also robust to imaging artifacts and can be trained and applied to any set of MR contrasts. Experiments show that HACA3 achieves state-of-the-art performance under multiple image quality metrics. We also demonstrate the applicability of HACA3 on downstream tasks with diverse MR datasets acquired from 21 sites with different field strengths, scanner platforms, and acquisition protocols.
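A highly simplified view of the disentanglement recipe is sketched below: anatomy and contrast are encoded separately, and harmonization renders the source anatomy with a target contrast code. The encoder and decoder modules are hypothetical placeholders, and this is not the HACA3 architecture (which additionally fuses anatomy across contrasts and models artifacts).

```python
import torch
import torch.nn as nn

def harmonize(anatomy_enc: nn.Module, contrast_enc: nn.Module, decoder: nn.Module,
              source_img: torch.Tensor, target_img: torch.Tensor) -> torch.Tensor:
    beta = anatomy_enc(source_img)     # anatomy representation of the source image
    theta = contrast_enc(target_img)   # contrast/appearance code of the target site
    return decoder(beta, theta)        # source anatomy rendered with the target contrast
```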