In this paper, a neural network-based method is proposed to reduce the drift of monocular visual odometry algorithms. A visual odometry algorithm computes the incremental motion of the vehicle between successive camera frames and then integrates these increments to determine the vehicle's pose. The proposed neural network reduces errors in the vehicle's pose estimate that result from inaccuracies in feature detection and matching, camera intrinsic parameters, and so on. These inaccuracies propagate into the estimate of the vehicle's motion and lead to large estimation errors. The drift-reduction neural network identifies such errors based on the motion of features across consecutive camera frames, yielding more accurate incremental motion estimates. The proposed drift-reduction neural network was trained and validated on the KITTI dataset, and the results demonstrate the efficacy of the proposed method in reducing errors in the incremental orientation estimates and, consequently, the overall error in the pose estimate.
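To make the overall pipeline concrete, the sketch below shows the generic integration of frame-to-frame increments into a pose, with an optional learned correction applied to each rotation increment. It is a minimal illustration of the idea only, not the authors' network; `correction_model` and its inputs are hypothetical placeholders.

```python
import numpy as np

def integrate_pose(R, t, dR, dt):
    """Compose the current pose (R, t) with an incremental motion (dR, dt)."""
    return R @ dR, t + R @ dt

def run_vo(increments, correction_model=None):
    """Integrate frame-to-frame increments into a global trajectory.

    increments: iterable of (dR, dt, feature_motion) per frame pair.
    correction_model: optional callable that refines the incremental rotation
        from the observed feature motion (hypothetical stand-in for the
        drift-reduction network described in the abstract).
    """
    R, t = np.eye(3), np.zeros(3)
    trajectory = [t.copy()]
    for dR, dt, feature_motion in increments:
        if correction_model is not None:
            dR = correction_model(dR, feature_motion)  # refine orientation increment
        R, t = integrate_pose(R, t, dR, dt)
        trajectory.append(t.copy())
    return np.array(trajectory)
```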
A complete computer vision system can be divided into two main categories: detection and classification. Lane detection algorithms belong to the detection category and have been applied in autonomous driving and smart vehicle systems. A lane detection system is responsible for detecting lane markings in complex road environments. At the same time, lane detection plays a crucial role in the warning system that alerts the driver when the car departs from its lane. The implemented lane detection algorithm is mainly divided into two steps: edge detection and line detection. In this paper, we compare the performance of state-of-the-art implementations on both FPGA and GPU to evaluate the trade-offs in latency, power consumption, and resource utilization. Our comparison emphasises the advantages and disadvantages of the two systems.
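For reference, the two steps named above are commonly prototyped on a CPU with OpenCV's Canny edge detector followed by a probabilistic Hough transform. The sketch below shows only that baseline, not the FPGA or GPU implementations compared in the paper; the thresholds are illustrative.

```python
import cv2
import numpy as np

def detect_lanes(frame_bgr):
    """Baseline lane detection: edge detection followed by line detection."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)           # step 1: edge detection
    lines = cv2.HoughLinesP(edges,                # step 2: line detection
                            rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```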
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
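For readers unfamiliar with the two validation practices the survey asks about, here is a minimal, generic sketch of K-fold cross-validation combined with an ensemble of identically configured models. It assumes scikit-learn and is not tied to any particular challenge solution.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

def kfold_ensemble_predict(X, y, X_test, n_splits=5, seed=0):
    """Train one model per fold and average their predictions (ensembling)."""
    fold_probas = []
    for train_idx, _ in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X):
        model = LogisticRegression(max_iter=1000)
        model.fit(X[train_idx], y[train_idx])
        fold_probas.append(model.predict_proba(X_test))
    return np.mean(fold_probas, axis=0)  # ensemble of identically configured models
```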
Neuromorphic computing using biologically inspired Spiking Neural Networks (SNNs) is a promising solution to meet the Energy-Throughput (ET) efficiency needed for edge computing devices. Neuromorphic hardware architectures that emulate SNNs in analog/mixed-signal domains have been proposed to achieve order-of-magnitude higher energy efficiency than all-digital architectures, albeit at the expense of limited scalability, susceptibility to noise, complex verification, and poor flexibility. On the other hand, state-of-the-art digital neuromorphic architectures focus either on achieving high energy efficiency (Joules/synaptic operation (SOP)) or throughput efficiency (SOPs/second/area), resulting in poor ET efficiency. In this work, we present THOR, an all-digital neuromorphic processor with a novel memory hierarchy and neuron update architecture that addresses both energy consumption and throughput bottlenecks. We implemented THOR in 28nm FDSOI CMOS technology, and our post-layout results demonstrate an ET efficiency of 7.29G $\text{TSOP}^2/\text{mm}^2\text{Js}$ at 0.9 V and 400 MHz, which represents a 3X improvement over state-of-the-art digital neuromorphic processors.
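The quoted figure has units of $\text{SOP}^2/(\text{mm}^2\cdot\text{J}\cdot\text{s})$, which is consistent with multiplying an energy efficiency (SOP/J) by a throughput density (SOP/s/mm^2). The snippet below only illustrates that unit arithmetic under this assumption, with made-up numbers; it is not taken from the paper's evaluation.

```python
def et_efficiency(sop_per_joule, sop_per_second, area_mm2):
    """Assumed composite metric: (SOP/J) * (SOP/s/mm^2) = SOP^2 / (J*s*mm^2)."""
    return sop_per_joule * (sop_per_second / area_mm2)

# Hypothetical example values, purely to show how the units combine:
print(et_efficiency(sop_per_joule=2.5e11, sop_per_second=1.0e9, area_mm2=1.2))
```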
Convolution-augmented transformers (Conformers) have recently been proposed for various speech-domain applications, such as automatic speech recognition (ASR) and speech separation, as they can capture both local and global dependencies. In this paper, we propose a Conformer-based metric generative adversarial network (CMGAN) for speech enhancement (SE) in the time-frequency (TF) domain. The generator encodes magnitude and complex spectrogram information using two-stage Conformer blocks to model both time and frequency dependencies. The decoder then decouples the estimation into a magnitude-mask decoder branch, which filters out unwanted distortions, and a complex refinement branch, which further improves the magnitude estimate and implicitly enhances the phase information. In addition, we include a metric discriminator to alleviate metric mismatch by optimizing the corresponding evaluation score. Objective and subjective evaluations show that CMGAN outperforms state-of-the-art methods on three speech enhancement tasks (denoising, dereverberation, and super-resolution). For example, a quantitative denoising analysis on the Voice Bank+DEMAND dataset shows that CMGAN outperforms previous models by a clear margin, with a PESQ of 3.41 and an SSNR of 11.10 dB.
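A minimal sketch of the decoupled decoding idea described above, assuming PyTorch: one branch predicts a mask applied to the noisy magnitude, and a second branch adds a complex (real/imaginary) residual. Tensor shapes, layer sizes, and module names are illustrative assumptions, not the CMGAN code.

```python
import torch
import torch.nn as nn

class DecoupledDecoder(nn.Module):
    """Toy stand-in for the two decoder branches described in the abstract."""
    def __init__(self, channels=64):
        super().__init__()
        self.mask_branch = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.refine_branch = nn.Conv2d(channels, 2, 1)  # residual for (real, imag)

    def forward(self, features, noisy_mag, noisy_ri):
        # features: (B, C, T, F); noisy_mag: (B, 1, T, F); noisy_ri: (B, 2, T, F)
        enhanced_mag = self.mask_branch(features) * noisy_mag   # magnitude masking
        enhanced_ri = noisy_ri + self.refine_branch(features)   # complex refinement
        return enhanced_mag, enhanced_ri
```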
Over the past decade, online education has become increasingly important in providing affordable, high-quality education to students around the world. This has been further amplified during the global pandemic as more students switch to learning online. Most online education tasks, such as course recommendation, exercise recommendation, or automated evaluation, depend on tracking students' knowledge progress. This is known as the \emph{knowledge tracing} problem in the literature. Addressing this problem requires collecting student assessment data that reflects the evolution of their knowledge. In this paper, we present a new knowledge tracing dataset named Database Exercises for Knowledge Tracing (DBE-KT22), collected from an online student exercise system in a course taught at the Australian National University in Australia. We discuss the characteristics of the DBE-KT22 dataset and contrast it with existing datasets in the knowledge tracing literature. Our dataset is publicly accessible through the Australian Data Archive platform.
The StarCraft II Multi-Agent Challenge (SMAC) was created as a challenging benchmark problem for cooperative multi-agent reinforcement learning (MARL). SMAC focuses on the problem of StarCraft micromanagement and assumes that each unit is controlled individually by a learning agent that acts independently and only possesses local information; centralized training with decentralized execution (CTDE) is assumed. To perform well in SMAC, MARL algorithms must handle the dual problems of multi-agent credit assignment and joint action evaluation. This paper introduces a new architecture, TransMix, a transformer-based joint action-value mixing network that we show to be efficient and scalable compared to other state-of-the-art cooperative MARL solutions. TransMix leverages the ability of transformers to learn richer mixing functions for combining the agents' individual value functions. It achieves results comparable to previous work on SMAC scenarios and outperforms other techniques on difficult scenarios, as well as on scenarios corrupted with Gaussian noise to simulate the fog of war.
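A minimal sketch of the general mixing idea, assuming PyTorch: per-agent Q-values are embedded as tokens, passed through a transformer encoder, and pooled into a joint action value. The layer sizes and mean pooling are assumptions for illustration and do not reproduce the TransMix architecture.

```python
import torch
import torch.nn as nn

class ToyTransformerMixer(nn.Module):
    """Mix per-agent Q-values into a joint value with a transformer encoder."""
    def __init__(self, d_model=32, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(1, d_model)              # one token per agent Q-value
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, agent_qs):                        # agent_qs: (B, n_agents)
        tokens = self.embed(agent_qs.unsqueeze(-1))     # (B, n_agents, d_model)
        mixed = self.encoder(tokens).mean(dim=1)        # pool over agents
        return self.head(mixed).squeeze(-1)             # joint Q_tot: (B,)
```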
Adaptive experiments can increase the chances that current students obtain better outcomes from field experiments of instructional interventions. In such experiments, the probability of assigning students to conditions changes as more data is collected, so students can be assigned to interventions that are likely to perform better. Digital educational environments lower the barriers to conducting such adaptive experiments, but they are rarely applied in education. One reason may be that researchers have access to few real-world case studies that illustrate the advantages and disadvantages of these experiments in a specific context. We evaluate the effect of sending reminders to students by using the Thompson sampling algorithm in an adaptive experiment and compare it with a traditional uniformly random experiment. We present this as a case study on how to conduct such experiments and pose a series of open questions about the conditions under which adaptive randomized experiments may be more or less useful.
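A minimal sketch of Thompson sampling for a two-condition experiment with a binary outcome (Beta-Bernoulli), with a uniform-random baseline for comparison; the outcome rates and sample size are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.35, 0.50]          # hypothetical outcome rates per condition
successes = np.ones(2)             # Beta(1, 1) priors
failures = np.ones(2)

def assign_thompson():
    """Sample a plausible rate per condition and assign to the best draw."""
    draws = rng.beta(successes, failures)
    return int(np.argmax(draws))

def assign_uniform():
    """Traditional design: fixed 50/50 assignment."""
    return int(rng.integers(2))

for _ in range(1000):
    arm = assign_thompson()        # swap in assign_uniform() for the baseline
    outcome = rng.random() < true_rates[arm]
    successes[arm] += outcome
    failures[arm] += 1 - outcome

print("assignments per condition:", successes + failures - 2)
```

Running the same loop with `assign_uniform()` keeps the assignment probabilities fixed, which is the traditional design being compared against.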
We present a variety of functions and classes that can assist the process of sampling a parameter space with the aid of machine learning. Particular attention is paid to setting sensible defaults so that the adjustments required for different problems remain small. This set of routines can be used for different types of analyses, from finding the boundaries of the parameter space to accumulating samples in regions of interest. In particular, we discuss two approaches assisted by incorporating different machine learning models: regression and classification. We show that a machine learning classifier can provide higher efficiency for exploring the parameter space. Furthermore, we introduce a boosting technique to improve the slow convergence at the start of the process. The use of these routines is best explained with the help of a few examples that illustrate the type of results one can obtain. We also include examples of the code used to obtain them, as well as a description of the adjustments that can be made to adapt the calculation to other problems. We conclude by showing the impact of these techniques when exploring the parameter space of the two-Higgs-doublet model that matches the measured Higgs boson signal strength. The code used for this paper and instructions on how to use it are available on the web.
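A minimal sketch of the classification-based approach, assuming scikit-learn: a classifier is trained on points labelled as inside or outside the region of interest and then used to propose new candidates where membership looks likely. The target condition, model choice, and batch sizes are illustrative and are not taken from the released code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def in_region(x):
    """Stand-in for an expensive calculation (e.g. checking observables)."""
    return float(np.sum(x**2) < 1.0)

# Initial random scan of a 2D parameter space.
X = rng.uniform(-3, 3, size=(200, 2))
y = np.array([in_region(x) for x in X])

for _ in range(5):                                   # iterative refinement
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    candidates = rng.uniform(-3, 3, size=(2000, 2))
    scores = clf.predict_proba(candidates)[:, 1]
    picks = candidates[np.argsort(scores)[-100:]]    # keep the most promising points
    X = np.vstack([X, picks])
    y = np.concatenate([y, [in_region(x) for x in picks]])

print("fraction of accumulated samples inside the region:", y.mean())
```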
The Internet of Things (IoT) is transforming industries by bridging the gap between Information Technology (IT) and Operational Technology (OT). Machines are being integrated with connected sensors and managed by intelligent analytics applications, accelerating digital transformation and business operations. Bringing Machine Learning (ML) to industrial devices is an advancement that aims to promote the convergence of IT and OT. However, developing ML applications in the Industrial IoT (IIoT) presents various challenges, including hardware heterogeneity, non-standardized representations of ML models, device and ML model compatibility issues, and slow application development. Successful deployment in this area requires a deep understanding of hardware, algorithms, software tools, and applications. Therefore, this paper presents a framework called Semantic Low-Code Engineering for ML Applications (SeLoC-ML), built on a low-code platform to support the rapid development of ML applications in IIoT by leveraging Semantic Web technologies. SeLoC-ML enables non-experts to easily model, discover, reuse, and matchmake ML models and devices. Project code for deployment on hardware can be generated automatically based on the matching results. Developers can benefit from semantic application templates, called recipes, to quickly prototype end-user applications. The evaluation confirms at least a threefold reduction in engineering effort compared to a traditional approach in an industrial ML classification case study, showing the efficiency and usefulness of SeLoC-ML. We share the code and welcome any contributions.
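The matchmaking step described above can be pictured as checking an ML model's declared requirements against a device's declared capabilities. The sketch below does this with plain dictionaries purely as a hypothetical illustration; SeLoC-ML itself relies on Semantic Web technologies rather than this ad-hoc structure, and all names and fields here are made up.

```python
def matches(model_req, device_cap):
    """Hypothetical matchmaking: does the device satisfy the model's requirements?"""
    return (model_req["ram_kb"] <= device_cap["ram_kb"]
            and model_req["flash_kb"] <= device_cap["flash_kb"]
            and set(model_req["sensors"]) <= set(device_cap["sensors"]))

model = {"ram_kb": 128, "flash_kb": 512, "sensors": ["accelerometer"]}
devices = {
    "plc_gateway": {"ram_kb": 256, "flash_kb": 1024, "sensors": ["accelerometer", "temperature"]},
    "tiny_sensor": {"ram_kb": 64, "flash_kb": 256, "sensors": ["temperature"]},
}
print([name for name, cap in devices.items() if matches(model, cap)])
```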