Face detection is a long-standing challenge in computer vision, with the ultimate goal of accurately localizing human faces in unconstrained environments. Confounding factors related to pose, image resolution, illumination, occlusion, and viewpoint present significant technical hurdles for these systems. That said, with recent developments in machine learning, face detection systems have achieved remarkable accuracy, driven largely by data-driven deep learning models [70]. Though encouraging, a critical aspect limiting both the performance and the social responsibility of deployed face detection systems is the inherent diversity of human appearance. Every human appearance reflects something about a person, including their heritage, identity, experience, and visible forms of self-expression. However, questions remain about how well face detection systems perform when presented with varying face sizes and shapes, skin tones, body modifications, and body adornments. To this end, we collected a dataset of distinctive human appearances, an image set representing appearances that occur with low frequency and tend to be underrepresented in face datasets. We then evaluated current state-of-the-art face detection models on their ability to detect faces in these images. The evaluation results show that face detection algorithms do not generalize well to these diverse appearances. Evaluating and characterizing the current state of face detection models will accelerate research and development towards creating fairer and more accurate face detection systems.
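As a simple illustration of the kind of per-group evaluation described above, the sketch below computes detection rates per appearance group. The record format and the `detect_faces` callable are assumptions for illustration, not the paper's code.

```python
from collections import defaultdict

def detection_rate_by_group(records, detect_faces):
    """Return the fraction of annotated faces detected, per appearance group.

    records: iterable of (image_path, appearance_group, n_annotated_faces).
    detect_faces: any off-the-shelf detector returning a list of boxes per image.
    """
    found = defaultdict(int)
    total = defaultdict(int)
    for path, group, n_faces in records:
        boxes = detect_faces(path)
        found[group] += min(len(boxes), n_faces)   # cap at the annotated count
        total[group] += n_faces
    return {g: found[g] / total[g] for g in total if total[g] > 0}
```

Comparing these per-group rates is one way to surface the generalization gaps the evaluation reports.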
Importance: The prevalence of severe mental illnesses (SMIs) in the United States is approximately 3% of the population. The ability to conduct risk screening for SMIs at large scale could inform early prevention and treatment. Objective: A scalable machine-learning-based tool was developed to conduct population-level risk screening for SMIs, including schizophrenia, schizoaffective disorders, psychosis, and bipolar disorders, using 1) healthcare insurance claims and 2) electronic health records (EHRs). Design, setting and participants: Data from beneficiaries of a nationwide commercial healthcare insurer with 77.4 million members, and EHR data from patients at eight U.S. academic hospitals, were used. First, the predictive models were constructed and tested using data from case-control cohorts drawn from insurance claims or EHR data. Second, the performance of the predictive models across data sources was analyzed. Third, as an illustrative application, the models were further trained to predict the risk of SMIs among 18-year-old young adults and individuals with substance-associated conditions. Main outcomes and measures: Machine learning-based predictive models for SMIs in the general population were built from insurance claims and EHRs.
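For illustration only, a toy version of a claims-based case-control classifier in this spirit; the diagnosis-code features, labels, and model choice below are assumptions, not the study's pipeline.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each member is summarized as counts of diagnosis codes observed in their claims.
members = [
    {"F32.9": 2, "Z00.00": 1},   # toy code counts, not real patient data
    {"F20.9": 1, "F39": 3},
    {"Z00.00": 2},
    {"F31.9": 1, "F29": 1},
]
labels = [0, 1, 0, 1]            # 1 = later SMI diagnosis (case), 0 = control

model = make_pipeline(DictVectorizer(sparse=True),
                      LogisticRegression(max_iter=1000))
model.fit(members, labels)
risk_scores = model.predict_proba(members)[:, 1]   # screening scores for ranking
```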
The ever-increasing scale of neural networks and their expanding application space have created a demand for more energy- and memory-efficient, artificial-intelligence-specific hardware. Avenues to mitigate the main issue, the von Neumann bottleneck, include in-memory and near-memory architectures as well as algorithmic approaches. Here, we leverage the low power consumption and inherently binary operation of magnetic tunnel junctions (MTJs) to demonstrate neural network hardware inference based on passive arrays of MTJs. In general, transferring a trained network model to hardware for inference suffers a loss in performance due to device-to-device variation, write errors, parasitic resistance, and other nonidealities. To quantify the effect of these hardware realities, we benchmark 300 unique weight-matrix solutions of a two-layer perceptron for classifying the Wine dataset, measuring both classification accuracy and write fidelity. Despite device imperfections, we achieve software-equivalent accuracy of up to 95.3% with proper tuning in 15 x 15 MTJ arrays having a range of device sizes. The success of this tuning process shows that new metrics are needed to characterize the performance and quality of networks reproduced in mixed-signal hardware.
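As a rough illustration of the kind of degradation being quantified, the sketch below (an assumption-laden simulation, not the paper's measurement setup) binarizes the weights of a small perceptron trained on the Wine dataset, injects write errors and device-to-device variation, and compares accuracy before and after. The noise levels and per-row rescaling are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import Perceptron
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf = Perceptron(max_iter=1000, random_state=0).fit(X, y)

rng = np.random.default_rng(0)

def to_hardware(w, sigma=0.05, p_write_error=0.01):
    """Binarize weights to two conductance states, then apply write errors and variation."""
    w_hw = np.sign(w) * np.abs(w).mean(axis=1, keepdims=True)  # two levels per row
    flips = rng.random(w.shape) < p_write_error                # random write errors
    w_hw[flips] *= -1
    return w_hw * (1 + rng.normal(0, sigma, w.shape))          # device-to-device variation

def hw_accuracy(coef, intercept):
    # Bias terms are kept in software here, purely for simplicity.
    pred = (X @ coef.T + intercept).argmax(axis=1)
    return (clf.classes_[pred] == y).mean()

print("software accuracy:", clf.score(X, y))
print("hardware accuracy:", hw_accuracy(to_hardware(clf.coef_), clf.intercept_))
```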
We enhance auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. Despite using 25× fewer parameters, our Retrieval-Enhanced Transformer (RETRO) obtains performance comparable to GPT-3 and Jurassic-1. After fine-tuning, RETRO's performance translates to downstream knowledge-intensive tasks such as question answering. RETRO combines a frozen BERT retriever, a differentiable encoder, and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than is typically consumed during training. We typically train RETRO from scratch, but can also rapidly retrofit pre-trained transformers with retrieval and still achieve good performance. Our work opens up new avenues for improving language models through explicit memory at unprecedented scale.
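A minimal sketch of the general idea in a toy setting: decoder hidden states are split into fixed-size chunks, a pre-embedded table stands in for the frozen retriever and its database, and each chunk cross-attends to its nearest retrieved neighbour. The dimensions, the mean-pooled retrieval key, and the residual merge are all assumptions, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, chunk_len, n_chunks, corpus_size = 64, 4, 3, 100

hidden = torch.randn(n_chunks * chunk_len, d)      # decoder hidden states for one sequence
corpus = torch.randn(corpus_size, chunk_len, d)    # frozen embeddings of corpus chunks
corpus_keys = corpus.mean(dim=1)                   # one retrieval key per corpus chunk

cross_attn = torch.nn.MultiheadAttention(d, num_heads=4, batch_first=True)

out = []
for i in range(n_chunks):
    h = hidden[i * chunk_len:(i + 1) * chunk_len].unsqueeze(0)   # (1, chunk_len, d)
    query = h.mean(dim=1)                                        # chunk summary vector
    nn_idx = F.cosine_similarity(query, corpus_keys).argmax()    # frozen nearest-neighbour lookup
    neighbour = corpus[nn_idx].unsqueeze(0)                      # retrieved chunk
    attended, _ = cross_attn(h, neighbour, neighbour)            # chunked cross-attention
    out.append(h + attended)                                     # residual merge
hidden_retro = torch.cat(out, dim=1)                             # retrieval-conditioned states
```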
The application of granular jamming in soft robotics is a recent and promising new technology, offering exciting possibilities for creating higher-performance robotic devices. Granular jamming is achieved by applying vacuum pressure to a membrane containing granular material, and it is particularly interesting from a design perspective because a large number of design parameters can be exploited to induce a variety of useful behaviours. To date, the effects of variables such as grain shape and size, as well as membrane material, have been studied as a means of inducing tailored grasping performance; however, another major contributing factor, membrane morphology, has not been studied, largely because of the complexity of accurately modelling and fabricating it. This research presents a study that optimizes the membrane morphology of granular jamming grippers, combining multi-material 3D printing and an evolutionary algorithm to search a diverse morphological design space in materio. Entire generations are printed in a single run and tested for gripper holding force, which serves as the fitness measure. Our approach is relatively scalable, circumvents the need for modelling, and guarantees the real-world performance of the grippers considered. Results show that membrane morphology is a key determinant of gripper performance. Common high-performing designs are seen to exploit all three of the main identified mechanisms by which granular grippers generate grip force, differ significantly from the standard gripper morphology, and generalize across a range of test objects.
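The evolutionary loop might look roughly like the sketch below; the genome length, mutation rate, and truncation selection are illustrative assumptions, and the fitness call is a placeholder for the physical pull-off test performed on each printed generation.

```python
import random

GENOME_LEN = 16            # e.g. parameters describing membrane thickness/shape
POP_SIZE, GENERATIONS, MUT_RATE = 10, 5, 0.2

def random_genome():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LEN)]

def mutate(genome):
    return [g + random.gauss(0, 0.1) if random.random() < MUT_RATE else g
            for g in genome]

def measure_holding_force(genome):
    # Placeholder: in the study this is a physical pull-off test on a printed gripper.
    return sum(genome)

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    ranked = sorted(population, key=measure_holding_force, reverse=True)
    parents = ranked[:POP_SIZE // 2]                  # truncation selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]
```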
Making histopathology image classifiers robust to a wide range of real-world variability is a challenging task. Here, we describe a candidate deep learning solution for the Mitosis Domain Generalization Challenge 2022 (MIDOG) to address the problem of generalization for mitosis detection in images of hematoxylin-eosin-stained histology slides under high variability (scanner, tissue type and species variability). Our approach consists of training a rotation-invariant deep learning model using aggressive data augmentation, with a training set enriched with hard negative examples and negative examples automatically selected from the unlabeled part of the challenge dataset. To optimize the performance of our models, we investigated a hard negative mining regime search procedure that led us to train our best model using a subset of image patches representing 19.6% of our training partition of the challenge dataset. Our candidate model ensemble achieved an F1-score of 0.697 on the final test set after automated evaluation on the challenge platform, the third best overall score in the MIDOG 2022 Challenge.
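A hedged sketch of the automatic negative-selection step described above; `model.predict_proba`, the patch list, and the keep fraction are assumptions for illustration rather than the challenge entry's actual code.

```python
import numpy as np

def select_hard_negatives(model, unlabeled_patches, keep_fraction=0.2):
    """Keep the unlabeled patches the detector most confidently mistakes for mitoses."""
    scores = np.array([model.predict_proba(p) for p in unlabeled_patches])  # mitosis scores
    n_keep = int(len(unlabeled_patches) * keep_fraction)
    hardest = np.argsort(scores)[::-1][:n_keep]       # highest false-positive scores first
    return [unlabeled_patches[i] for i in hardest]
```

Varying `keep_fraction` (and re-running training) is one way a regime search like the one described could arrive at a small, highly informative subset of patches.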
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the resulting question-answer pairs as training data for a state-of-the-art BERT-based QA system. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed from subjects (or objects) and predicates, while objects (or subjects) serve as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves performance on par with existing state-of-the-art QA systems, with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
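A minimal sketch of the triple-to-question step, with made-up question templates; PIE-QG's actual surface realization of questions may differ.

```python
def triples_to_qa(triples):
    """From <subject, predicate, object> triples, form questions whose answers are the
    object (or the subject)."""
    qa_pairs = []
    for subj, pred, obj in triples:
        qa_pairs.append((f"{subj} {pred} what?", obj))          # object as answer
        qa_pairs.append((f"Who or what {pred} {obj}?", subj))   # subject as answer
    return qa_pairs

print(triples_to_qa([("Marie Curie", "discovered", "polonium")]))
# [('Marie Curie discovered what?', 'polonium'),
#  ('Who or what discovered polonium?', 'Marie Curie')]
```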
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
While the brain connectivity network can inform the understanding and diagnosis of developmental dyslexia, its cause-effect relationships have not yet been sufficiently examined. Employing electroencephalography signals and a band-limited white noise stimulus at 4.8 Hz (prosodic-syllabic frequency), we measure the phase Granger causalities among channels to identify differences between dyslexic learners and controls, thereby proposing a method to calculate directional connectivity. As causal relationships run in both directions, we explore three scenarios, namely channels' activity as sources, as sinks, and in total. Our proposed method can be used for both classification and exploratory analysis. In all scenarios, we find confirmation of the established right-lateralized Theta sampling network anomaly, in line with the temporal sampling framework's assumption of oscillatory differences in the Theta and Gamma bands. Further, we show that this anomaly primarily occurs in the causal relationships of channels acting as sinks, where it is significantly more pronounced than when only total activity is observed. In the sink scenario, our classifier obtains 0.84 and 0.88 accuracy and 0.87 and 0.93 AUC for the Theta and Gamma bands, respectively.
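A small exploratory sketch of directional connectivity aggregated per channel as source and as sink; it uses a plain lag-1 Granger-style score on synthetic data rather than the phase Granger causality computed in the study, and the channel count and signal lengths are placeholders.

```python
import numpy as np

def granger_score(x, y, lag=1):
    """How much do past values of x reduce the prediction error of y?"""
    y_t, y_p, x_p = y[lag:], y[:-lag], x[:-lag]
    A_r = np.column_stack([y_p, np.ones_like(y_p)])          # restricted: y's own past
    A_f = np.column_stack([y_p, x_p, np.ones_like(y_p)])     # full model: plus x's past
    rss_r = np.sum((y_t - A_r @ np.linalg.lstsq(A_r, y_t, rcond=None)[0]) ** 2)
    rss_f = np.sum((y_t - A_f @ np.linalg.lstsq(A_f, y_t, rcond=None)[0]) ** 2)
    return np.log(rss_r / rss_f)                              # > 0 means x helps predict y

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))                          # 8 channels x 1000 samples
scores = np.array([[granger_score(eeg[i], eeg[j]) if i != j else 0.0
                    for j in range(8)] for i in range(8)])    # scores[i, j]: i drives j
as_source = scores.sum(axis=1)                                # channel as source (outgoing)
as_sink = scores.sum(axis=0)                                  # channel as sink (incoming)
```

These per-channel source and sink totals are the kind of features that could then be fed to a classifier or inspected directly, as in the sink scenario described above.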
Vocal Bursts -- short, non-speech vocalizations that convey emotions, such as laughter, cries, sighs, moans, and groans -- are an often-overlooked aspect of speech emotion recognition, but an important aspect of human vocal communication. One barrier to the study of these interesting vocalizations is a lack of large datasets. I am pleased to introduce the EmoGator dataset, which consists of 32,040 samples from 365 speakers, totaling 16.91 hours of audio; each sample was classified into one of 30 distinct emotion categories by the speaker. Several different approaches to constructing classifiers to identify emotion categories will be discussed, and directions for future research will be suggested. The dataset is available for download from https://github.com/fredbuhl/EmoGator.