While deep learning methods have hitherto achieved considerable success in medical image segmentation, they are still hampered by two limitations: (i) reliance on large-scale, well-labeled datasets, which are difficult to curate due to the expert-driven and time-consuming nature of pixel-level annotation in clinical practice, and (ii) failure to generalize from one domain to another, especially when the target domain is a different modality with severe domain shift. Recent unsupervised domain adaptation (UDA) techniques leverage abundant labeled source data together with unlabeled target data to reduce the domain gap, but these methods degrade significantly with limited source annotations. In this study, we address this underexplored UDA problem, investigating a challenging but valuable realistic scenario in which the source domain not only exhibits domain shift with respect to the target domain but also suffers from label scarcity. To this end, we propose a novel and generic framework called "Label-Efficient Unsupervised Domain Adaptation" (LE-UDA). In LE-UDA, we construct self-ensembling consistency for knowledge transfer between the two domains, as well as a self-ensembling adversarial learning module to achieve better feature alignment for UDA. To assess the effectiveness of our method, we conduct extensive experiments on two different tasks for cross-modality segmentation between MRI and CT images. Experimental results demonstrate that the proposed LE-UDA can efficiently leverage limited source labels to improve cross-domain segmentation performance, outperforming state-of-the-art UDA approaches in the literature. Code is available at: https://github.com/jacobzhaoziyuan/LE-UDA.
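The self-ensembling consistency mentioned above is commonly realized with a mean-teacher setup: a teacher network tracks an exponential moving average of the student and the two are encouraged to agree on unlabeled data. The sketch below is a generic illustration of that idea under this assumption, not the authors' exact LE-UDA implementation; function names and hyperparameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    """Update teacher weights as an exponential moving average of the student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.data.mul_(alpha).add_(s_param.data, alpha=1 - alpha)

def consistency_loss(student, teacher, unlabeled_batch):
    """Penalize disagreement between student and teacher predictions on the
    same unlabeled images under perturbation (here, additive Gaussian noise)."""
    noisy = unlabeled_batch + 0.1 * torch.randn_like(unlabeled_batch)
    student_logits = student(noisy)
    with torch.no_grad():  # the teacher is updated only via EMA, not gradients
        teacher_logits = teacher(unlabeled_batch)
    return F.mse_loss(
        torch.softmax(student_logits, dim=1),
        torch.softmax(teacher_logits, dim=1),
    )
```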
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
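As a concrete illustration of the workflow described above, here is a minimal sketch using MONAI's public dictionary-based transforms and one of its built-in architectures; the channel sizes and transform choices are illustrative, not recommendations.

```python
# Minimal MONAI sketch: medical-image-aware preprocessing feeding a 3D U-Net.
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, ScaleIntensityd
from monai.networks.nets import UNet

# Dictionary transforms keep the image and its label mask in sync.
preprocess = Compose([
    LoadImaged(keys=["image", "label"]),           # reads NIfTI/DICOM via MONAI's readers
    EnsureChannelFirstd(keys=["image", "label"]),  # enforce channel-first layout
    ScaleIntensityd(keys=["image"]),               # normalize intensities to [0, 1]
])

# A small 3D U-Net for single-channel volumes with two output classes.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
```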
Kidney transplantation is the preferred treatment for people suffering from end-stage renal disease. Even successful kidney transplants eventually fail over time, an event known as graft failure; the time to graft failure, or graft survival time, can vary significantly between recipients. A significant biological factor affecting graft survival times is the compatibility between the human leukocyte antigens (HLAs) of the donor and recipient. We propose to model HLA compatibility using a network, where the nodes denote different HLAs of the donor and recipient, and edge weights denote compatibilities of the HLAs, which can be positive or negative. The network is indirectly observed, as the edge weights are estimated from transplant outcomes rather than directly observed. We propose a latent space model for such indirectly-observed weighted and signed networks. We demonstrate that our latent space model can not only result in more accurate estimates of HLA compatibilities, but can also be incorporated into survival analysis models to improve accuracy for the downstream task of predicting graft survival times.
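One common way to build such a latent space model is to let each node's latent vector generate signed edge weights through inner products, so noisy, indirectly-observed weights can be denoised by the low-rank latent structure. The sketch below illustrates that generic construction only; the dimensions, noise scale, and SVD-based estimator are assumptions for illustration, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 20, 2

# Latent positions for the HLA nodes; inner products can be positive or
# negative, which yields the signed compatibility weights.
z = rng.normal(size=(n_nodes, dim))
true_weights = z @ z.T

# The network is indirectly observed: each weight is a noisy estimate
# (e.g., a coefficient fit to transplant outcomes).
observed = true_weights + rng.normal(scale=0.5, size=(n_nodes, n_nodes))

# A rank-`dim` truncated SVD exploits the latent structure to denoise.
u, s, vt = np.linalg.svd(observed)
denoised = u[:, :dim] @ np.diag(s[:dim]) @ vt[:dim, :]
print(np.abs(denoised - true_weights).mean(),
      np.abs(observed - true_weights).mean())  # denoised error is smaller
```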
Fairness, a criterion that focuses on evaluating algorithm performance across different demographic groups, has attracted attention in natural language processing, recommender systems, and facial recognition. Since there are many demographic attributes in medical image samples, it is important to understand the concept of fairness, be familiar with unfairness mitigation techniques, evaluate the degree of fairness of an algorithm, and recognize the challenges in fairness issues in medical image analysis (MedIA). In this paper, we first give a comprehensive and precise definition of fairness, and then introduce the techniques currently used in fairness studies in MedIA. After that, we list public medical image datasets that contain demographic attributes to facilitate fairness research, and summarize current algorithms concerning fairness in MedIA. To help achieve a better understanding of fairness and call attention to fairness-related issues in MedIA, experiments are conducted to compare the difference between fairness and data imbalance, to verify the existence of unfairness in various MedIA tasks, especially in classification, segmentation, and detection, and to evaluate the effectiveness of unfairness mitigation algorithms. Finally, we conclude with opportunities and challenges for fairness in MedIA.
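To make the notion of evaluating group fairness concrete, a simple metric such as the largest accuracy gap between demographic groups can be computed as below. This is a generic illustration of a fairness metric, not one of the survey's specific algorithms; the function name and toy data are invented.

```python
import numpy as np

def group_accuracy_gap(y_true, y_pred, group):
    """Largest pairwise difference in accuracy across demographic groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accs = [
        (y_pred[group == g] == y_true[group == g]).mean()
        for g in np.unique(group)
    ]
    return max(accs) - min(accs)

# Toy example: two groups with unequal error rates.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(group_accuracy_gap(y_true, y_pred, group))  # 0.25
```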
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Networks and temporal point processes serve as fundamental building blocks for modeling complex dynamic relational data in various domains. We propose the latent space Hawkes (LSH) model, a novel generative model for continuous-time networks of relational events, using latent space representations of nodes. We model relational events between nodes using mutually exciting Hawkes processes with baseline intensities dependent upon the distances between the nodes in the latent space and sender- and receiver-specific effects. We demonstrate that our proposed LSH model can replicate many features observed in real temporal networks, including reciprocity and transitivity, while also achieving superior prediction accuracy and providing more interpretable fits than existing models.
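The core ingredient above is a mutually exciting Hawkes intensity whose baseline depends on latent-space distance plus sender and receiver effects. A generic sketch of such an intensity is given below, assuming an exponential excitation kernel and a log-linear baseline; the parameterization and names are illustrative, not the paper's exact specification.

```python
import numpy as np

def lsh_intensity(t, events_vu, z_u, z_v, delta_u, gamma_v,
                  mu0=1.0, jump=0.5, decay=1.0):
    """Intensity of events from node u to node v at time t.

    The baseline decreases with latent-space distance ||z_u - z_v|| and is
    adjusted by a sender effect delta_u and receiver effect gamma_v; past
    events from v to u (events_vu) are mutually exciting.
    """
    baseline = np.exp(mu0 - np.linalg.norm(z_u - z_v) + delta_u + gamma_v)
    past = np.asarray([s for s in events_vu if s < t])
    excitation = jump * np.sum(np.exp(-decay * (t - past))) if past.size else 0.0
    return baseline + excitation

# Example: intensity at t=2.0 given reciprocal events at t=0.5 and t=1.5.
print(lsh_intensity(2.0, [0.5, 1.5], np.array([0.0, 0.0]),
                    np.array([1.0, 0.0]), 0.1, -0.2))
```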
The stochastic block model (SBM) is one of the most widely used generative models for network data. Many continuous-time dynamic network models are built upon the same assumption as the SBM: that events between node pairs are conditionally independent given the block or community memberships, which is often violated by the dependent event sequences observed in real networks. We propose the multivariate community Hawkes (MULCH) model, an extremely flexible community-based model for continuous-time networks that introduces dependence between node pairs using structured multivariate Hawkes processes. We fit the model using a spectral clustering and likelihood-based local refinement procedure. We find that our proposed MULCH model is far more accurate than existing models for both predictive and generative tasks.
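The fitting procedure mentioned above can be sketched generically: spectral clustering of an aggregated event-count matrix recovers candidate communities, which a likelihood-based local refinement then improves. The sketch below shows only the spectral step under standard assumptions; the refinement is method-specific and omitted, and this is not the paper's exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_communities(count_matrix, k):
    """Assign nodes to k communities via spectral clustering on the
    aggregated event-count matrix (events summed over the observation window)."""
    # Symmetrize so the eigendecomposition is real-valued.
    sym = count_matrix + count_matrix.T
    eigvals, eigvecs = np.linalg.eigh(sym)
    # Embed each node using the k leading eigenvectors by magnitude, then cluster.
    leading = np.argsort(np.abs(eigvals))[::-1][:k]
    return KMeans(n_clusters=k, n_init=10).fit_predict(eigvecs[:, leading])
```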
Kidney transplantation can significantly enhance the standard of living for people suffering from end-stage renal disease. An important factor affecting graft survival time (the time until the transplant fails and the patient requires another transplant) in kidney transplantation is the compatibility of the human leukocyte antigens (HLAs) between the donor and recipient. In this paper, we propose four new biologically-relevant feature representations for incorporating HLA information into machine learning-based survival analysis algorithms. We evaluate our proposed HLA feature representations on a database of over 100,000 transplants and find that they improve prediction accuracy by about 1%, modest at the patient level but potentially significant at a societal level. Accurate prediction of survival times can improve transplant survival outcomes, enabling better allocation of donors to recipients and reducing the number of re-transplants due to graft failure with poorly matched donors.
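While the paper's four representations are specific to its biology, the baseline way HLA compatibility usually enters such models is as per-locus mismatch counts between donor and recipient alleles (the classical 0-6 summary over the A, B, and DR loci). A minimal, hypothetical encoding along those lines, with invented allele names, might look like:

```python
def hla_mismatch_counts(donor, recipient, loci=("A", "B", "DR")):
    """Count donor alleles at each locus that the recipient does not share."""
    counts = {}
    for locus in loci:
        recipient_alleles = set(recipient.get(locus, []))
        counts[locus] = sum(
            allele not in recipient_alleles for allele in donor.get(locus, [])
        )
    return counts

donor = {"A": ["A1", "A2"], "B": ["B7", "B8"], "DR": ["DR15", "DR4"]}
recipient = {"A": ["A1", "A3"], "B": ["B8", "B8"], "DR": ["DR15", "DR15"]}
print(hla_mismatch_counts(donor, recipient))  # {'A': 1, 'B': 1, 'DR': 1}
```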
Learning efficient and interpretable policies has been a challenging task in reinforcement learning (RL), particularly in the visual RL setting with complex scenes. While neural networks have achieved competitive performance, the resulting policies are often over-parameterized black boxes that are difficult to interpret and deploy efficiently. More recent symbolic RL frameworks have shown that high-level domain-specific programming logic can be designed to handle both policy learning and symbolic planning. However, these approaches rely on coded primitives with little feature learning, and when applied to high-dimensional visual scenes, they can suffer from scalability issues and perform poorly when images have complex object interactions. To address these challenges, we propose Differentiable Symbolic Expression Search (DiffSES), a novel symbolic learning approach that discovers discrete symbolic policies using partially differentiable optimization. By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions, while also incorporating the strengths of neural networks for feature learning and optimization. Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods, with a reduced amount of symbolic prior knowledge.
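To make "a symbolic policy over object-level abstractions" concrete, the sketch below shows the general shape of such a discovered policy: a short, readable expression over detected object attributes rather than raw pixels. The feature names and the expression itself are invented for illustration, not a policy produced by DiffSES.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    x: float  # horizontal position of a detected object
    y: float  # vertical position of a detected object

def symbolic_policy(agent: Obj, target: Obj) -> int:
    """A hypothetical discovered policy: a compact comparison expression
    over object-level features, in place of a neural network on pixels.
    Returns an action index: 0 = left, 1 = stay, 2 = right."""
    dx = target.x - agent.x
    if abs(dx) < 0.05:       # close enough horizontally
        return 1
    return 2 if dx > 0 else 0

print(symbolic_policy(Obj(0.2, 0.0), Obj(0.8, 1.0)))  # 2 (move right)
```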
Recent years have seen a proliferation of research on adversarial machine learning. Numerous papers demonstrate powerful algorithmic attacks against a wide variety of machine learning (ML) models, and numerous other papers propose defenses that can withstand most attacks. However, abundant real-world evidence suggests that actual attackers use simple tactics to subvert ML-driven systems, and as a result security practitioners have not prioritized adversarial ML defenses. Motivated by the apparent gap between researchers and practitioners, this position paper aims to bridge the two domains. We first present three real-world case studies from which we can glean practical insights unknown or neglected in research. Next we analyze all adversarial ML papers recently published in top security conferences, highlighting positive trends and blind spots. Finally, we state positions on precise and cost-driven threat modeling, collaboration between industry and academia, and reproducible research. We believe that our positions, if adopted, will increase the real-world impact of future endeavours in adversarial ML, bringing both researchers and practitioners closer to their shared goal of improving the security of ML systems.