Harmonic functions are abundant in nature, appearing in limiting cases of Maxwell's equations, the Navier-Stokes equations, and the heat and wave equations. Consequently, harmonic functions have many applications, ranging from industrial process optimisation to robotic path planning and the calculation of first exit times of random walks. Despite their ubiquity and relevance, there have been few attempts to develop effective means of representing harmonic functions in machine learning architectures, either in machine learning on classical computers or in the nascent field of quantum machine learning. Architectures that impose or encourage an inductive bias towards harmonic functions would facilitate data-driven modelling and the solution of inverse problems in a range of applications. For classical neural networks, it has already been established that leveraging inductive biases can in general lead to improved performance of learning algorithms. The introduction of such inductive biases within a quantum machine learning setting is still in its nascent stages. In this work, we derive exactly harmonic (conventional and quantum) neural networks in two dimensions for simply-connected domains by leveraging the characteristics of holomorphic complex functions. We then demonstrate how these can be approximately extended to multiply-connected two-dimensional domains using techniques inspired by domain decomposition in physics-informed neural networks. We further provide architectures and training protocols to effectively impose approximately harmonic constraints in three dimensions and higher, and as a corollary we report divergence-free network architectures in arbitrary dimensions. Our approaches are demonstrated with applications to heat transfer, electrostatics and robot navigation, with comparisons to physics-informed neural networks included.
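As an illustrative sketch of the underlying holomorphic construction (the layer sizes, tanh activation, and random weights below are assumptions, not the architecture derived in the work), a network whose output is the real part of a composition of holomorphic maps is exactly harmonic on a simply-connected domain, which a finite-difference Laplacian check confirms numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex-valued affine layers; compositions of holomorphic maps
# (affine maps and complex tanh) remain holomorphic, so Re f(x + iy) is harmonic.
W1 = rng.normal(size=(8, 1)) + 1j * rng.normal(size=(8, 1))
b1 = rng.normal(size=(8, 1)) + 1j * rng.normal(size=(8, 1))
W2 = rng.normal(size=(1, 8)) + 1j * rng.normal(size=(1, 8))
b2 = rng.normal() + 1j * rng.normal()

def u(x, y):
    """Real part of a small holomorphic 'network' evaluated at z = x + iy."""
    z = x + 1j * y
    h = np.tanh(W1 * z + b1)          # complex tanh is holomorphic
    return np.real(W2 @ h + b2).item()

# Finite-difference check that the Laplacian vanishes at a sample point.
eps, x0, y0 = 1e-3, 0.3, -0.2
lap = (u(x0 + eps, y0) + u(x0 - eps, y0) + u(x0, y0 + eps) + u(x0, y0 - eps)
       - 4 * u(x0, y0)) / eps**2
print("finite-difference Laplacian (expected close to zero):", lap)
```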
Machine learning models capable of handling the large datasets collected in the financial world can often become black boxes that are expensive to run. The quantum computing paradigm suggests new optimization techniques that, combined with classical algorithms, may deliver competitive, faster and more interpretable models. In this work we propose a quantum-enhanced machine learning solution for the prediction of credit rating downgrades, also known as fallen-angels forecasting in the financial risk management field. We implement this solution on a neutral-atom Quantum Processing Unit with up to 60 qubits on a real-life dataset. We report competitive performance against the state-of-the-art Random Forest benchmark, while our model achieves better interpretability and comparable training times. We examine how to improve performance in the near term, validating our ideas with Tensor Network-based numerical simulations.
In many computational problems in engineering and science, differentiation of functions or models is essential, but integration is also required. An important class of computational problems comprises the so-called integro-differential equations, which contain both integrals and derivatives of a function. In another example, stochastic differential equations can be written in terms of a partial differential equation for the probability density function of the random variable. To learn the characteristics of the random variable from the density function, specific integral transforms, namely the moments of the density function, need to be computed. Recently, the machine-learning paradigm of physics-informed neural networks has gained increasing popularity as a method for solving differential equations by leveraging automatic differentiation. In this work, we propose to extend the paradigm of physics-informed neural networks with automatic integration, in order to compute complex integral transforms on trained solutions and to solve integro-differential equations where integrals are computed during training. Furthermore, we showcase these techniques in a variety of application settings, numerically simulating quantum-computer-based neural networks as well as classical neural networks.
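One ingredient of this idea can be sketched in plain Python: evaluating an integral transform (here, the zeroth and first moments) of a surrogate density model with a fixed Gauss-Legendre quadrature rule, so that the same rule could be reused inside a training loss for an integro-differential residual. The Gaussian surrogate and the quadrature order are illustrative assumptions, not the models used in the work.

```python
import numpy as np

def surrogate_density(x, mu=0.5, sigma=0.2):
    """Stand-in for a trained (quantum or classical) neural model of a density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Fixed Gauss-Legendre rule mapped to [a, b]; because the nodes are fixed,
# the rule can be embedded in a differentiable training loss.
a, b = 0.0, 1.0
nodes, weights = np.polynomial.legendre.leggauss(32)
x_q = 0.5 * (b - a) * nodes + 0.5 * (b + a)
w_q = 0.5 * (b - a) * weights

mass = np.sum(w_q * surrogate_density(x_q))        # zeroth moment (normalisation)
mean = np.sum(w_q * x_q * surrogate_density(x_q))  # first moment of the density
print("integral transform estimates:", mass, mean)
```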
Quantum computing promises to speed up some of the most challenging problems in science and engineering. Quantum algorithms have been proposed that show theoretical advantages in applications ranging from chemistry to logistics optimisation. Many problems arising in science and engineering can be rewritten as a set of differential equations. Quantum algorithms for solving differential equations have shown a provable advantage in the fault-tolerant quantum computing regime, where deep and wide quantum circuits can be used to solve large linear systems such as partial differential equations (PDEs). Recently, variational approaches for solving nonlinear PDEs on near-term quantum devices have also been proposed. One of the most promising general approaches is based on recent developments in the field of scientific machine learning for solving PDEs. We extend the applicability of near-term quantum computers to more general scientific machine learning tasks, including the discovery of differential equations from measured datasets. We use differentiable quantum circuits (DQCs) to solve equations parameterized by a library of operators, and perform regression on a combination of data and equations. Our results show a promising path towards quantum model discovery (QMoD), at the interface between classical and quantum machine learning approaches. We demonstrate successful parameter inference and equation discovery on different systems, including a second-order ordinary differential equation and a nonlinear partial differential equation.
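A classical toy version of the equation-discovery step is sketched below: given samples of u(t), the coefficients of a small library of candidate terms are fitted to du/dt by least squares and the dominant ones kept. Here the differentiable quantum circuit is replaced by finite differences on synthetic logistic-growth data purely for illustration; the library, threshold, and data are assumptions, not the systems studied in the work.

```python
import numpy as np

# Synthetic data from the logistic ODE du/dt = u - u^2 (ground truth hidden from the fit).
t = np.linspace(0.0, 6.0, 400)
u = 1.0 / (1.0 + 9.0 * np.exp(-t))       # closed-form logistic solution, u(0) = 0.1
dudt = np.gradient(u, t)                  # stand-in for a differentiable surrogate model

# Library of candidate operators applied to u; in QMoD this regression is run
# against a model parameterized by a differentiable quantum circuit.
library = np.column_stack([np.ones_like(u), u, u**2, u**3])
names = ["1", "u", "u^2", "u^3"]

coeffs, *_ = np.linalg.lstsq(library, dudt, rcond=None)
coeffs[np.abs(coeffs) < 0.05] = 0.0       # crude sparsity threshold

discovered = " + ".join(f"{c:+.2f}*{n}" for c, n in zip(coeffs, names) if c != 0.0)
print("du/dt ~", discovered)              # expect roughly +1.00*u - 1.00*u^2
```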
Point-of-Care Ultrasound (POCUS) refers to clinician-performed and interpreted ultrasonography at the patient's bedside. Interpreting these images requires a high level of expertise, which may not be available during emergencies. In this paper, we support POCUS by developing classifiers that can aid medical professionals by diagnosing whether or not a patient has pneumothorax. We decomposed the task into multiple steps, using YOLOv4 to extract relevant regions of the video and a 3D sparse coding model to represent video features. Given the difficulty in acquiring positive training videos, we trained a small-data classifier with a maximum of 15 positive and 32 negative examples. To counteract this limitation, we leveraged subject matter expert (SME) knowledge to limit the hypothesis space, thus reducing the cost of data collection. We present results using two lung ultrasound datasets and demonstrate that our model is capable of achieving performance on par with SMEs in pneumothorax identification. We then developed an iOS application that runs our full system in less than 4 seconds on an iPad Pro, and less than 8 seconds on an iPhone 13 Pro, labeling key regions in the lung sonogram to provide interpretable diagnoses.
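The small-data classification step can be sketched with a heavily regularised linear model and leave-one-out evaluation, as below. The 64-dimensional synthetic features standing in for pooled 3D sparse-coding activations over YOLOv4-extracted regions, and the leave-one-out protocol, are illustrative assumptions rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Placeholder feature vectors standing in for sparse-coding activations
# pooled over detected lung regions (64 dimensions is an assumption).
n_pos, n_neg, n_feat = 15, 32, 64
X = np.vstack([rng.normal(1.0, 1.0, (n_pos, n_feat)),   # pneumothorax
               rng.normal(0.0, 1.0, (n_neg, n_feat))])  # no pneumothorax
y = np.array([1] * n_pos + [0] * n_neg)

# With so few videos, strong regularisation and leave-one-out evaluation
# are a reasonable small-data protocol.
clf = LogisticRegression(C=0.1, max_iter=1000)
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy on synthetic features: {acc:.2f}")
```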
Systems biology aims to create mathematical models of biological systems that reduce inherent biological complexity and provide predictions for applications such as therapeutic development. However, determining which mathematical model is correct, and how best to arrive at that answer, remains a challenge. We present an algorithm for automated biological model selection using mathematical models from systems biology together with likelihood-free inference methods. Our algorithm shows improved performance in arriving at the correct model, given no prior information, compared with conventional heuristics used in experimental biology and with random search. The method shows promise for accelerating basic biological science and drug discovery.
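A minimal rejection-style sketch of likelihood-free model selection is shown below: two candidate growth models are simulated under priors, and the one whose simulations most often fall within a tolerance of the observed summary statistics accumulates more approximate posterior mass. The exponential/logistic candidates, summary statistics, and tolerance are illustrative assumptions, not the algorithm of this work.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(model, rate, t=np.linspace(0, 5, 20)):
    """Two candidate growth models for a noisy biological time series."""
    if model == "exponential":
        y = np.exp(rate * t)
    else:  # "logistic", saturating at a carrying capacity of 10
        y = 10.0 / (1.0 + 9.0 * np.exp(-rate * t))
    return y + rng.normal(0.0, 0.2, t.shape)

# "Observed" data generated from the logistic model (unknown to the selector).
observed = simulate("logistic", rate=1.2)

def summary(y):
    return np.array([y[-1], y.max() - y.min()])   # crude summary statistics

tolerance, counts = 1.0, {"exponential": 0, "logistic": 0}
for _ in range(5000):
    model = rng.choice(["exponential", "logistic"])   # uniform prior over models
    rate = rng.uniform(0.1, 2.0)                      # prior over the growth rate
    if np.linalg.norm(summary(simulate(model, rate)) - summary(observed)) < tolerance:
        counts[model] += 1

total = sum(counts.values())
print({m: c / total for m, c in counts.items()})      # approximate model posterior
```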
The sparse variational Gaussian process (SVGP) method is a common choice for non-conjugate Gaussian process inference because of its computational benefits. In this paper, we improve its computational efficiency by using a dual parameterization, in which each data example is assigned dual parameters, similar to the site parameters used in expectation propagation. Our dual parameterization speeds up inference using natural gradient descent and provides a tighter evidence lower bound for approximate hyperparameter learning. The approach has the same memory cost as current SVGP methods, but it is faster and more accurate.
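For intuition, a standard site-parameter form (used here as an assumed illustration, not the paper's exact derivation) attaches two dual parameters $(\lambda_{1,i}, \lambda_{2,i})$ to each data example, so the approximate posterior takes the conjugate-looking form

$$ q(f) \propto p(f)\prod_{i=1}^{n} \exp\!\Big(\lambda_{1,i} f_i - \tfrac{1}{2}\lambda_{2,i} f_i^2\Big), $$

which has the same structure as the site terms of expectation propagation and is the structure that natural-gradient updates of the dual parameters can exploit.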
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
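The document-similarity core can be illustrated with an off-the-shelf sentence-embedding model and cosine similarity, as in the sketch below; this is a generic illustration and does not reproduce Logic Mill's actual model choice, corpus, or API.

```python
from sentence_transformers import SentenceTransformer, util

# Generic illustration of embedding-based document similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "A method for measuring semantic similarity between patent documents.",
    "Deep learning techniques for natural language processing of scientific text.",
    "A process for manufacturing reinforced concrete beams.",
]
query = "Finding patents that are semantically similar to a scientific paper."

doc_emb = model.encode(documents, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_emb)[0]
for doc, score in sorted(zip(documents, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {doc}")
```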
Charisma is considered one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence's (AI) perspective to provide it with such skill. Beyond that, a plethora of use cases opens up for computational measurement of human charisma, such as tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies, or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Moreover, automatic measurement appears quite feasible given the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can appear charismatic, but also analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and behavioural cues of it. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
This paper proposes a novel observer-based controller for a Vertical Take-Off and Landing (VTOL) Unmanned Aerial Vehicle (UAV), designed to directly receive measurements from a Vision-Aided Inertial Navigation System (VA-INS) and produce the required thrust and rotational torque inputs. The VA-INS is composed of a vision unit (monocular or stereo camera) and a typical low-cost 6-axis Inertial Measurement Unit (IMU) equipped with an accelerometer and a gyroscope. A major benefit of this approach is its applicability in environments where the Global Positioning System (GPS) is inaccessible. The proposed VTOL-UAV observer utilizes IMU and feature measurements to accurately estimate attitude (orientation), gyroscope bias, position, and linear velocity. The ability to use VA-INS measurements directly makes the proposed observer design more computationally efficient, as it obviates the need for attitude and position reconstruction. Once the motion components are estimated, the observer-based controller is used to control the VTOL-UAV attitude, angular velocity, position, and linear velocity, guiding the vehicle along the desired trajectory in six degrees of freedom (6 DoF). The closed-loop estimation and control errors of the observer-based controller are proven to be exponentially stable starting from almost any initial condition. To achieve a global and unique VTOL-UAV representation in 6 DoF, the proposed approach is posed on the Lie group, and the design in unit quaternions is presented. Although the proposed approach is described in continuous form, the discrete version is also provided and tested. Keywords: vision-aided inertial navigation system, unmanned aerial vehicle, vertical take-off and landing, stochastic, noise, robotics, control systems, air mobility, observer-based controller algorithm, landmark measurement, exponential stability.
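As background for the unit-quaternion design, the sketch below propagates an attitude quaternion through the standard kinematics q_dot = 0.5 * q ⊗ [0, omega] using bias-corrected gyroscope rates; the gyro reading, bias estimate, and simple Euler integration step are illustrative and do not reproduce the paper's observer or controller.

```python
import numpy as np

def quat_mul(q, p):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def propagate(q, omega_meas, bias_hat, dt):
    """One Euler step of q_dot = 0.5 * q ⊗ [0, omega] with bias-corrected gyro rates."""
    omega = omega_meas - bias_hat
    q_dot = 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))
    q_next = q + dt * q_dot
    return q_next / np.linalg.norm(q_next)   # renormalise to stay on the unit sphere

q = np.array([1.0, 0.0, 0.0, 0.0])            # identity attitude
omega_meas = np.array([0.02, -0.01, 0.10])    # rad/s, illustrative gyro reading
bias_hat = np.array([0.001, 0.0, -0.002])     # illustrative estimated gyro bias
q = propagate(q, omega_meas, bias_hat, dt=0.01)
print("propagated unit quaternion:", q)
```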