In the process of materials discovery, chemists currently need to perform many laborious, time-consuming, and often dangerous lab experiments. To accelerate this process, we propose a framework for robots to assist chemists by performing lab experiments autonomously. The solution allows a general-purpose robot to perform diverse chemistry experiments and efficiently make use of available lab tools. Our system can load high-level descriptions of chemistry experiments, perceive a dynamic workspace, and autonomously plan the required actions and motions to perform the given chemistry experiments with common tools found in the existing lab environment. Our architecture uses a modified PDDLStream solver for integrated task and constrained motion planning, which generates plans and motions that are guaranteed to be safe by preventing collisions and spillage. We present a modular framework that can scale to many different experiments, actions, and lab tools. In this work, we demonstrate the utility of our framework on three pouring skills and two foundational chemical experiments for materials synthesis: solubility and recrystallization. More experiments and updated evaluations can be found at https://ac-rad.github.io/arc-icra2023.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and ensembling was performed by only 50% of the participants, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
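The k-fold cross-validation and ensembling practices tallied in the survey can be sketched concretely. The following is a minimal NumPy illustration; the per-fold "model" here is a placeholder mean predictor, not any surveyed method:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Split shuffled sample indices into k folds for cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def kfold_ensemble_predict(train_fn, X, y, X_test, k=5):
    """Train one model per fold and average their test predictions,
    mirroring the k-fold + ensembling strategy reported in the survey."""
    folds = kfold_indices(len(X), k)
    preds = []
    for i in range(k):
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train_fn(X[train_idx], y[train_idx])
        preds.append(model(X_test))
    return np.mean(preds, axis=0)  # ensemble = average of fold models

# Toy example: each fold "model" predicts the mean of its training targets.
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 2.0 * X.ravel()
mean_model = lambda Xtr, ytr: (lambda Xt: np.full(len(Xt), ytr.mean()))
out = kfold_ensemble_predict(mean_model, X, y, X[:3], k=5)
print(out)
```

With this toy predictor, every sample sits in exactly four of the five training splits, so the fold means average back to the global mean (9.0); with real models, the same scheme yields the variance reduction that makes ensembling popular among challenge winners.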
Intelligent and low-power retinal prostheses are highly demanded in this era, in which wearable and implantable devices are used for numerous healthcare applications. In this paper, we propose an energy-efficient dynamic scene processing framework (SpikeSEE) that combines a spike representation encoding technique and a bio-inspired spiking recurrent neural network (SRNN) model to achieve intelligent processing and extremely low-power computation. The spike representation encoding technique interprets dynamic scenes with sparse spike trains, decreasing the data volume. The SRNN model, inspired by the special structure and spike processing method of the human retina, is adopted to predict the responses of ganglion cells to dynamic scenes. Experimental results show that the Pearson correlation coefficient of the proposed SRNN model reaches 0.93, outperforming the state-of-the-art processing framework for retinal prostheses. Thanks to the spike representation and SRNN processing, the model can extract visual features in a multiplication-free fashion. Compared with a convolutional recurrent neural network (CRNN) processing framework, the proposed framework achieves a 12x reduction in power. Our proposed SpikeSEE predicts the responses of ganglion cells more accurately with lower energy consumption, which alleviates the precision and power issues of retinal prostheses and provides a potential solution for wearable or implantable prostheses.
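The core idea of spike representation encoding, trading dense frames for sparse event trains, can be illustrated with a simple threshold (delta) encoder. This sketch is illustrative only and is not SpikeSEE's actual encoder:

```python
import numpy as np

def delta_spike_encode(signal, threshold=0.5):
    """Encode a 1-D intensity stream as a sparse spike train: emit a +1/-1
    spike only when the signal drifts more than `threshold` away from the
    level at the last spike (illustrative, not SpikeSEE's encoder)."""
    spikes = np.zeros(len(signal), dtype=np.int8)
    level = signal[0]
    for t in range(1, len(signal)):
        if signal[t] - level >= threshold:
            spikes[t] = 1          # ON spike: intensity rose
            level = signal[t]
        elif level - signal[t] >= threshold:
            spikes[t] = -1         # OFF spike: intensity fell
            level = signal[t]
    return spikes

sig = np.array([0.0, 0.2, 0.9, 1.0, 0.1, 0.1])
print(delta_spike_encode(sig))
```

Static or slowly varying regions produce no spikes at all, which is the mechanism by which event-style encodings reduce data volume for downstream spiking networks.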
We develop an all-in-one computer vision toolbox named EasyCV to facilitate the use of various SOTA computer vision methods. Recently, we added YOLOX-PAI, an improved version of YOLOX, into EasyCV. We conduct ablation studies to investigate the influence of certain detection methods on YOLOX. We also provide an easy-to-use interface for PAI-Blade to accelerate the inference process based on BladeDISC and TensorRT. Finally, we achieve 42.8 mAP on COCO within 1.0 ms on a single NVIDIA V100 GPU, which is slightly faster than YOLOv6. A simple but efficient predictor API is also designed in EasyCV for end-to-end object detection. Code and models are now available at: https://github.com/alibaba/easycv.
In this paper, we present our solutions for the Multimodal Sentiment Analysis Challenge (MuSe) 2022, which includes the MuSe-Humor, MuSe-Reaction, and MuSe-Stress sub-challenges. MuSe 2022 focuses on humor detection, emotional reactions, and multimodal emotional stress, utilizing different modalities and datasets. In our work, different kinds of multimodal features are extracted, including acoustic, visual, textual, and biological features. These features are fused by TEMMA and GRU within a self-attention mechanism framework. In this paper, 1) several new audio features, facial expression features, and paragraph-level text embeddings are extracted to improve accuracy; 2) we substantially improve the accuracy and reliability of multimodal emotion prediction by mining and fusing the multimodal features; 3) effective data augmentation strategies are applied in model training to alleviate the problem of sample imbalance and prevent the model from learning biased subject characteristics. For the MuSe-Humor sub-challenge, our model obtains an AUC score of 0.8932. For the MuSe-Reaction sub-challenge, the Pearson correlation coefficient of our model on the test set is 0.3879, which outperforms all other participants. For the MuSe-Stress sub-challenge, our approach outperforms the baseline in both arousal and valence on the test dataset, reaching a final combined result of 0.5151.
By adding exit layers to a deep learning network, early exit can terminate the inference early with accurate results. However, the passive decision of whether to exit or continue to the next layer has to run through every pre-placed exit layer until the network exits, and it is also difficult to adjust the configuration of the computing platform alongside the inference proceeding. By incorporating a low-cost prediction engine, we propose a Predictive Exit framework for computation- and energy-efficient deep learning applications. Predictive Exit predicts where the network will exit (i.e., establishes the number of remaining layers needed to complete the inference), which effectively reduces the network's computation cost by exiting on time without running every pre-placed exit layer. Moreover, according to the number of remaining layers, proper computing configurations (i.e., frequency and voltage) are selected to execute the network, further saving energy. Extensive experimental results demonstrate that Predictive Exit achieves up to 96.2% computation reduction and 72.9% energy saving compared with the classic deep learning network, and 12.8% computation reduction and 37.6% energy saving compared with state-of-the-art early exit strategies, given the same inference accuracy and latency.
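The control-flow difference from standard early exit can be sketched as follows. The block below is a toy illustration with a stand-in predictor; the paper's low-cost prediction engine and its frequency/voltage selection are not modeled:

```python
# Minimal sketch of the predictive-exit idea (names are illustrative):
# instead of evaluating an exit classifier after every block, a low-cost
# predictor guesses where the network will exit, and only that one exit
# head is actually run.

def run_with_predictive_exit(x, blocks, exit_heads, predict_exit):
    """blocks: list of feature-transform callables;
    exit_heads[i]: cheap classifier usable after block i;
    predict_exit: low-cost engine mapping input -> predicted exit index."""
    k = predict_exit(x)                      # predicted exit point
    for i, block in enumerate(blocks):
        x = block(x)
        if i == k:                           # run only the predicted head
            return exit_heads[i](x), i + 1   # (result, blocks actually run)
    return exit_heads[-1](x), len(blocks)    # fall back to the final head

# Toy pipeline: each "block" adds 1; each head reports the feature value.
blocks = [lambda v: v + 1] * 4
heads = [lambda v: f"class<{v}>"] * 4
result, layers_run = run_with_predictive_exit(0, blocks, heads, lambda v: 1)
print(result, layers_run)  # exits after block 1 -> only 2 of 4 blocks run
```

Knowing `k` before execution is also what enables the framework to pick a frequency/voltage pair sized to the remaining work, rather than reacting after each exit check.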
Recently, applications of deep neural networks (DNNs) have become prominent in many fields, such as computer vision (CV) and natural language processing (NLP), owing to their superior feature extraction performance. However, high-dimensional parameter models and large-scale mathematical computation restrict execution efficiency, especially for Internet of Things (IoT) devices. Different from the previous cloud/edge-only pattern, which puts tremendous pressure on uplink communication, and the device-only pattern, which bears unaffordable computation intensity, we highlight collaborative computation of the DNN model between the device and the edge, which can achieve a good balance between communication load and execution accuracy. Specifically, a systematic on-demand co-inference framework is proposed to exploit a multi-branch structure, in which a pre-trained AlexNet is right-sized through early exits and partitioned at an intermediate DNN layer. Integer quantization is implemented to further compress the transmitted bits. As a result, we establish a new deep reinforcement learning (DRL) optimizer, Soft Actor-Critic for Discrete (SAC-D), which generates the exit point, the partition point, and the compression bits via soft policy iteration. Based on a latency- and accuracy-aware reward design, this optimizer adapts well to complex environments such as dynamic wireless channels and arbitrary CPU processing, and is able to support 5G URLLC. Real-world experiments on a Raspberry Pi 4 and a PC demonstrate the performance of the proposed solution.
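The integer-quantization step used to compress features before device-to-edge transmission can be sketched with a uniform quantizer. This is an illustrative sketch; in the paper, the bit width ("compression bits") is chosen by the SAC-D optimizer rather than fixed by hand:

```python
import numpy as np

def quantize_int(x, n_bits):
    """Uniformly quantize an intermediate feature tensor to n_bits integers
    for transmission (illustrative; not the paper's exact scheme)."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2 ** n_bits - 1) if hi > lo else 1.0
    q = np.round((x - lo) / scale).astype(np.uint8 if n_bits <= 8 else np.uint16)
    return q, lo, scale                      # q plus params to invert it

def dequantize_int(q, lo, scale):
    """Reconstruct approximate float features on the edge side."""
    return q.astype(np.float32) * scale + lo

x = np.array([-1.0, -0.25, 0.5, 1.0], dtype=np.float32)  # stand-in features
q, lo, scale = quantize_int(x, n_bits=2)
x_hat = dequantize_int(q, lo, scale)
print(q.tolist(), x_hat)
```

At 2 bits, each transmitted value costs 2 bits instead of 32, a 16x reduction in uplink volume; the reconstruction error is bounded by half the quantization step, which is the accuracy/bandwidth trade-off the DRL reward must balance.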
Action recognition is an important task for video understanding with broad applications. However, developing an effective action recognition solution often requires extensive engineering efforts in building and testing different combinations of modules and their hyperparameters. In this demo, we present AutoVideo, a Python system for automated video action recognition. AutoVideo features 1) a highly modular and extendable infrastructure following a standard pipeline language, 2) an exhaustive list of primitives for pipeline construction, 3) data-driven tuners to save the effort of pipeline tuning, and 4) an easy-to-use graphical user interface (GUI). AutoVideo is released under the MIT license at https://github.com/datamllab/autovideo
Video captioning combines video understanding and language generation. Unlike image captioning, which describes a static image with details of almost every object, video captioning usually considers a sequence of frames and is biased towards focused objects, e.g., the objects that stay in focus regardless of the changing background. Therefore, detecting and properly accommodating focused objects is critical in video captioning. To enforce the description of focused objects and achieve controllable video captioning, we propose an Object-Oriented Non-Autoregressive approach (O2NA), which performs caption generation in three steps: 1) identify the focused objects and predict their locations in the target caption; 2) generate the related attribute words and relation words of these focused objects to form a caption draft; 3) combine the video information to refine the caption draft into a fluent final caption. Since the focused objects are generated and located ahead of the other words, it is difficult to apply the word-by-word autoregressive generation process; instead, we adopt a non-autoregressive approach. Experiments on two benchmark datasets, MSR-VTT and MSVD, demonstrate the effectiveness of O2NA, which achieves results competitive with the state-of-the-art but with higher diversity and faster inference speed.
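The three-step scheme can be sketched as toy control flow. All stage functions below are hypothetical placeholders standing in for the model's parallel predictions; only the ordering of the steps follows the approach described above:

```python
# Illustrative sketch of O2NA's three-step, non-autoregressive generation
# (toy string operations; the real model predicts all tokens in parallel
# from video features -- the stage functions here are hypothetical).

def generate_caption(focused_objects, predict_positions, predict_context, refine):
    # 1) place the focused objects at their predicted caption positions
    length, positions = predict_positions(focused_objects)
    draft = ["<mask>"] * length
    for obj, pos in zip(focused_objects, positions):
        draft[pos] = obj
    # 2) fill attribute/relation words around the objects, in parallel
    draft = predict_context(draft)
    # 3) refine the draft into a fluent final caption
    return refine(draft)

# Toy instantiations of the three stages.
pos_fn = lambda objs: (5, [1, 4])
ctx_fn = lambda d: [w if w != "<mask>" else "the" for w in d]
ref_fn = lambda d: " ".join(d).replace("the dog the the cat",
                                       "the dog chases the cat")
caption = generate_caption(["dog", "cat"], pos_fn, ctx_fn, ref_fn)
print(caption)
```

Because the object tokens are fixed first, all remaining positions can be filled simultaneously rather than left-to-right, which is what makes the controllability and the inference-speed gains possible.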
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or can only marginally benefit, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models, namely by exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
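The finding that distilling token relations beats matching individual features can be illustrated with a minimal relation-distillation loss. This sketch collapses the teacher's relation maps into a single token-affinity matrix and is not the paper's exact objective:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def token_relation_loss(student_tokens, teacher_tokens):
    """Distill token-to-token relations rather than raw features: match the
    normalized token-affinity maps of student and teacher (a simplified
    sketch; TinyMIM distills Q-K and V-V relations, reduced here to one
    affinity map per model)."""
    def relations(t):
        d = t.shape[-1]
        return softmax(t @ t.T / np.sqrt(d))  # (N, N) affinity over tokens
    rs, rt = relations(student_tokens), relations(teacher_tokens)
    return float(np.mean((rs - rt) ** 2))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))            # 4 tokens, teacher dim 8
loss_self = token_relation_loss(teacher, teacher)
loss_other = token_relation_loss(rng.normal(size=(4, 8)), teacher)
print(loss_self, loss_other)
```

A practical side benefit of matching N x N relation maps instead of features is that the student's embedding dimension need not equal the teacher's, so no projection head is required for the loss itself.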