Light guide plates are essential optical components widely used in applications ranging from medical lighting fixtures to back-lit TV displays. In this work, we introduce a fully-integrated, high-throughput, high-performance deep learning-driven workflow for light guide plate surface visual quality inspection (VQI) tailored for real-world manufacturing environments. To enable automated VQI on the edge within the fully-integrated VQI system, a highly compact deep anti-aliased attention condenser neural network (which we name LightDefectNet) tailored specifically for light guide plate surface defect detection in resource-constrained scenarios was created via machine-driven design exploration with computational and "best-practices" constraints as well as an L_1 paired classification discrepancy loss. Experiments show that LightDefectNet achieves a detection accuracy of ~98.2% on the LGPSDD benchmark while having just 770K parameters (~33X and ~6.9X lower than ResNet-50 and EfficientNet-B0, respectively), ~93M FLOPs (~88X and ~8.4X lower than ResNet-50 and EfficientNet-B0, respectively), and ~8.8X faster inference speed than EfficientNet-B0 on an embedded ARM processor. As such, the proposed deep learning-driven workflow, integrated with the aforementioned LightDefectNet neural network, is highly suited for high-throughput, high-performance light guide plate surface VQI within real-world manufacturing environments.
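As a quick sanity check, the reduction factors above can be reproduced from commonly cited reference model sizes (ResNet-50 at roughly 25.6M parameters and 8.2G FLOPs, EfficientNet-B0 at roughly 5.3M parameters and 0.78G FLOPs; these reference figures are assumptions, not numbers from this work):

```python
# Hypothetical sanity check of the reported reduction ratios.
# Reference model sizes are commonly cited figures, assumed here.
lightdefectnet = {"params": 770e3, "flops": 93e6}
references = {
    "ResNet-50": {"params": 25.6e6, "flops": 8.2e9},        # assumed
    "EfficientNet-B0": {"params": 5.3e6, "flops": 0.78e9},  # assumed
}

for name, ref in references.items():
    p_ratio = ref["params"] / lightdefectnet["params"]
    f_ratio = ref["flops"] / lightdefectnet["flops"]
    print(f"{name}: {p_ratio:.1f}x fewer params, {f_ratio:.1f}x fewer FLOPs")
```

Under these assumed reference sizes the ratios come out near the reported ~33X/~6.9X (parameters) and ~88X/~8.4X (FLOPs).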
Creating high-performance generalizable deep neural networks for phytoplankton monitoring requires utilizing large-scale data coming from diverse global water sources. A major challenge to training such networks lies in data privacy, where data collected at different facilities are often restricted from being transferred to a centralized location. A promising approach to overcome this challenge is federated learning, where training is done at site level on local data, and only the model parameters are exchanged over the network to generate a global model. In this study, we explore the feasibility of leveraging federated learning for privacy-preserving training of deep neural networks for phytoplankton classification. More specifically, we simulate two different federated learning frameworks, federated learning (FL) and mutually exclusive FL (ME-FL), and compare their performance to a traditional centralized learning (CL) framework. Experimental results from this study demonstrate the feasibility and potential of federated learning for phytoplankton monitoring.
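The core mechanism described above, training at site level on local data and exchanging only model parameters, can be sketched with a minimal federated-averaging loop. This is a generic FedAvg illustration on a toy linear model, not the study's actual implementation; all names and data here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One site's local update: plain gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """Each site trains on its private data; only parameters are averaged."""
    local_ws = [local_train(global_w, X, y) for X, y in sites]
    return np.mean(local_ws, axis=0)  # FedAvg with equal site weighting

# Toy data: three "facilities" whose raw data never leave the site.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    sites.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, sites)
print(w)  # approaches [2, -1] without centralizing any data
```

ME-FL as simulated in the study would differ in how data and model exchange are partitioned across sites; the sketch above only shows the shared-global-model FL case.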
Computer vision and machine learning are playing an increasingly important role in computer-assisted diagnosis; however, the application of deep learning to medical imaging has challenges in data availability and data imbalance, and it is especially important that models for medical imaging are built to be trustworthy. Therefore, we propose TRUDLMIA, a trustworthy deep learning framework for medical image analysis, which adopts a modular design, leverages self-supervised pre-training, and utilizes a novel surrogate loss function. Experimental evaluations indicate that models generated from the framework are both trustworthy and high-performing. It is anticipated that the framework will support researchers and clinicians in advancing the use of deep learning for dealing with public health crises including COVID-19.
Breast cancer is the second most common type of cancer in women in Canada and the United States, representing over 25% of all new female cancer cases. Neoadjuvant chemotherapy treatment has recently risen in usage as it may result in a patient having a pathologic complete response (pCR), and it can shrink inoperable breast cancer tumors prior to surgery so that the tumor becomes operable, but it is difficult to predict a patient's pathologic response to neoadjuvant chemotherapy. In this paper, we investigate the efficacy of leveraging learnt volumetric deep features from a newly introduced magnetic resonance imaging (MRI) modality called synthetic correlated diffusion imaging (CDI$^s$) for the purpose of pCR prediction. More specifically, we leverage a volumetric convolutional neural network to learn volumetric deep radiomic features from a pre-treatment cohort and construct a predictor based on the learnt features using the post-treatment response. As the first study to explore the utility of CDI$^s$ within a deep learning perspective for clinical decision support, we evaluated the proposed approach using the ACRIN-6698 study against those learnt using gold-standard imaging modalities, and found that the proposed approach can provide enhanced pCR prediction performance and thus may be a useful tool to aid oncologists in improving recommendation of treatment of patients. Subsequently, this approach to leverage volumetric deep radiomic features (which we name Cancer-Net BCa) can be further extended to other applications of CDI$^s$ in the cancer domain to further improve prediction performance.
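As a loose illustration of the overall pattern (pooling a 3D imaging volume into a fixed-length feature vector, then fitting a simple predictor on those features), the following toy sketch uses hand-rolled pooled statistics as a stand-in for learned volumetric CNN features. Everything here, including the simulated cohort, is an assumption for illustration and not the Cancer-Net BCa model:

```python
import numpy as np

rng = np.random.default_rng(1)

def volumetric_features(volume, grid=2):
    """Pool a 3D volume into a fixed-length feature vector (mean and std
    over a coarse 3D grid) - a crude stand-in for learned CNN features."""
    d, h, w = volume.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            for k in range(grid):
                block = volume[i*d//grid:(i+1)*d//grid,
                               j*h//grid:(j+1)*h//grid,
                               k*w//grid:(k+1)*w//grid]
                feats += [block.mean(), block.std()]
    return np.array(feats)

# Toy cohort: "responders" carry a brighter central signal.
volumes, labels = [], []
for label in (0, 1):
    for _ in range(20):
        v = rng.normal(size=(8, 8, 8))
        if label:
            v[2:6, 2:6, 2:6] += 2.0  # simulated response-associated signal
        volumes.append(v)
        labels.append(label)

X = np.stack([volumetric_features(v) for v in volumes])
y = np.array(labels)

# Simple linear predictor on the pooled features (least-squares fit).
Xb = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
preds = (Xb @ w > 0.5).astype(int)
print((preds == y).mean())  # training accuracy on the toy cohort
```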
The task of motion forecasting is critical for self-driving vehicles (SDVs) to be able to plan a safe maneuver. Towards this goal, modern approaches reason about the map, the agents' past trajectories and their interactions in order to produce accurate forecasts. The predominant approach has been to encode the map and other agents in the reference frame of each target agent. However, this approach is computationally expensive for multi-agent prediction as inference needs to be run for each agent. To tackle the scaling challenge, the solution thus far has been to encode all agents and the map in a shared coordinate frame (e.g., the SDV frame). However, this is sample inefficient and vulnerable to domain shift (e.g., when the SDV visits uncommon states). In contrast, in this paper, we propose an efficient shared encoding for all agents and the map without sacrificing accuracy or generalization. Towards this goal, we leverage pair-wise relative positional encodings to represent geometric relationships between the agents and the map elements in a heterogeneous spatial graph. This parameterization allows us to be invariant to scene viewpoint, and save online computation by re-using map embeddings computed offline. Our decoder is also viewpoint agnostic, predicting agent goals on the lane graph to enable diverse and context-aware multimodal prediction. We demonstrate the effectiveness of our approach on the urban Argoverse 2 benchmark as well as a novel highway dataset.
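The viewpoint invariance afforded by pair-wise relative positional encodings can be illustrated with a minimal 2D sketch: encode element j in element i's heading frame, then verify that the encoding is unchanged under an arbitrary rigid transform of the global frame. This is a simplified illustration, not the paper's encoding; relative heading and map features are omitted:

```python
import numpy as np

def relative_encoding(pos_i, yaw_i, pos_j):
    """Encode element j relative to element i: the displacement rotated
    into i's heading frame, plus distance. Independent of global frame."""
    c, s = np.cos(-yaw_i), np.sin(-yaw_i)
    R = np.array([[c, -s], [s, c]])
    d = R @ (pos_j - pos_i)
    return np.array([d[0], d[1], np.linalg.norm(d)])

# Two agents described in some global frame...
a, yaw_a = np.array([10.0, 5.0]), 0.3
b = np.array([14.0, 8.0])
enc1 = relative_encoding(a, yaw_a, b)

# ...and the same scene after an arbitrary rigid transform of the viewpoint.
theta, t = 1.2, np.array([-30.0, 7.0])
c_, s_ = np.cos(theta), np.sin(theta)
R = np.array([[c_, -s_], [s_, c_]])
a2, b2 = R @ a + t, R @ b + t
enc2 = relative_encoding(a2, yaw_a + theta, b2)

print(np.allclose(enc1, enc2))  # True: the encoding is viewpoint-invariant
```

Because the encoding depends only on pairwise geometry, embeddings of static map elements can be computed once and reused across viewpoints, which is the source of the online-compute savings described above.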
We present FOLIO, a human-annotated, open-domain, and logically complex and diverse dataset for natural language (NL) reasoning, equipped with first-order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique conclusions), each paired with one of 487 sets of premises that serve as rules for deductively reasoning about the validity of each conclusion. The logical correctness of the premises and conclusions is ensured by their parallel FOL annotations, which are automatically verified by our FOL inference engine. In addition to the main NL reasoning task, the NL-FOL pairs in FOLIO automatically constitute a new NL-FOL translation dataset that uses FOL as the logical form. Through extensive experiments, we systematically evaluate the FOL reasoning ability of supervised fine-tuning on medium-sized language models (BERT, RoBERTa) and few-shot prompting on large language models (GPT-NeoX, OPT, GPT-3, Codex). For NL-FOL translation, we experiment with GPT-3 and Codex. Our results show that one of the most capable publicly available large language models (LLMs), GPT-3 davinci, performs only slightly better than random on a subset of FOLIO, and is especially poor at predicting the correct truth values for False and Unknown conclusions. Our dataset and code are available at https://github.com/yale-lily/folio.
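The kind of premise-to-conclusion validity checking that the FOL annotations support can be illustrated with a toy forward-chaining sketch over grounded Horn-style rules (a generic illustration with made-up predicates, not FOLIO's inference engine):

```python
def forward_chain(facts, rules):
    """Derive all facts entailed by grounded Horn rules (body -> head)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= known and head not in known:
                known.add(head)
                changed = True
    return known

# Toy NL premises: "All wumpuses are yumpuses. Max is a wumpus."
# (grounded instance of the universally quantified rule, for illustration)
facts = {("wumpus", "max")}
rules = [([("wumpus", "max")], ("yumpus", "max"))]

derived = forward_chain(facts, rules)
print(("yumpus", "max") in derived)  # True: the conclusion follows
```

Full FOL entailment (with quantifiers and negation, as in FOLIO's True/False/Unknown labels) requires a proper theorem prover or model checker; the sketch only covers the positive-deduction case.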
Training deep convolutional neural networks (CNNs) via iterative optimization has had incredible success in finding optimal parameters. However, modern CNN architectures often contain millions of parameters, so any given model for a single architecture resides in a massive parameter space. Models with similar loss can have drastically different characteristics, such as adversarial robustness, generalizability, and quantization robustness. For deep learning on the edge, quantization robustness is often crucial, yet finding a quantization-robust model can require significant effort. Recent works using Graph Hypernetworks (GHN) have shown remarkable performance in predicting high-performing parameters for CNN architectures. Inspired by these successes, we ask whether the graph representations of GHN-2 can also be leveraged to predict quantization-robust parameters, an approach we call GHN-Q. We conduct the first-ever study exploring the use of graph hypernetworks for predicting the parameters of unseen quantized CNN architectures. We focus on a reduced CNN search space and find that GHN-Q can in fact predict quantization-robust parameters for a variety of 8-bit quantized CNNs. Decent quantized accuracies are observed even with 4-bit quantization, despite GHN-Q not being trained on it. Quantized fine-tuning of GHN-Q at lower bitwidths may bring further improvements and is currently being explored.
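The quantization robustness at stake here can be made concrete with a simple post-training uniform quantization sketch, showing how round-off error grows as the bitwidth shrinks from 8 to 4 bits (a generic illustration of weight quantization, not GHN-Q itself):

```python
import numpy as np

def quantize(w, bits=8):
    """Uniform symmetric post-training quantization of a weight tensor."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.round(w / scale).astype(np.int8)  # values fit in int8 for bits <= 8
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(64, 64)).astype(np.float32)

for bits in (8, 4):
    q, scale = quantize(w, bits)
    err = np.abs(dequantize(q, scale) - w).max()
    print(f"{bits}-bit max round-off error: {err:.5f}")  # grows as bits shrink
```

A quantization-robust model is one whose accuracy degrades little under exactly this kind of weight perturbation, which is why predicting such parameters directly, as GHN-Q aims to do, is attractive for edge deployment.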
With the growing adoption of deep learning for on-device TinyML applications, there has been an ever-increasing demand for more efficient neural network backbones optimized for the edge. Recently, the introduction of attention condenser networks has resulted in low-footprint, highly efficient self-attention neural networks that strike a strong balance between accuracy and speed. In this study, we introduce a new, faster attention condenser design called double-condensing attention condensers, which enables more condensed feature embeddings. We further employ a machine-driven design exploration strategy that imposes best-practices design constraints for greater efficiency and robustness to produce the macro-micro architecture constructs of the backbone. Compared to several other state-of-the-art efficient backbones, the resulting backbone (which we name AttendNeXt) achieves significantly higher inference throughput on an embedded ARM processor (>10X faster than FB-Net C at higher accuracy and speed), smaller model size (>1.47X smaller than OFA-62 at higher speed and similar accuracy), and higher accuracy (1.1% higher than MobileViT-XS on ImageNet at higher speed). These promising results demonstrate that exploring different efficient architecture designs and self-attention mechanisms can yield interesting new building blocks for TinyML applications.
Autonomous bin picking poses significant challenges to vision-driven robotic systems given the complexity of the problem, ranging from diverse sensor modalities and highly entangled object layouts to varied item properties and gripper types. Existing methods often address the problem from one perspective, yet diverse items and complex bin scenes require a multitude of picking strategies together with advanced reasoning. As such, building reliable machine learning algorithms for this complex task requires significant amounts of comprehensive, high-quality data. Collecting such data in the real world would be too expensive and time-prohibitive, and therefore intractable from a scalability perspective. To tackle this big, diverse data problem, we take inspiration from the recent rise of the metaverse concept and introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis. The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order, and ambidextrous grasp labels for parallel-jaw and vacuum grippers. We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into 5 levels of difficulty plus an unseen-object set, to evaluate different object and layout properties. Finally, we conduct extensive experiments showing that our proposed vacuum seal model and synthetic dataset achieve state-of-the-art performance and generalize to real-world use cases.
Climate change is increasing the frequency and severity of harmful algal blooms (HABs), which cause significant fish deaths in aquaculture farms. This contributes to marine pollution and greenhouse gas (GHG) emissions, since dead fish are either dumped into the ocean or taken to landfills, which in turn negatively impacts the climate. Currently, the standard method for enumerating harmful algae and other phytoplankton is to manually observe and count them under a microscope. This is a time-consuming, tedious, and error-prone process, leading to compromised management decisions by farmers. Automating this process for rapid and accurate HAB monitoring would therefore be extremely helpful. However, it requires large and diverse datasets of phytoplankton images, and such datasets are hard to produce quickly. In this work, we explore the feasibility of generating novel high-resolution photorealistic synthetic phytoplankton images, containing multiple species in the same image, given a small dataset of real images. To this end, we employ generative adversarial networks (GANs) to generate the synthetic images. We evaluate three different GAN architectures, ProjectedGAN, FastGAN, and StyleGANv2, using standard image quality metrics. We empirically demonstrate the generation of high-fidelity synthetic phytoplankton images using a training dataset of only 961 real images. This work thus demonstrates the ability of GANs to create large synthetic datasets of phytoplankton from small training datasets, accomplishing a key step towards sustainable systematic monitoring of harmful algal blooms.