This paper introduces new hexagonal and pentagonal PEM fuel cell models, which are then optimized to achieve improved cell performance. The input parameters of the multi-objective optimization algorithm are the inlet pressure and temperature, while consumption and output power are the objective parameters. The output data of the numerical simulations were used to train a deep neural network and then modeled with polynomial regression. The objective functions were extracted using the response surface method (RSM), and the objectives were optimized using a multi-objective genetic algorithm (NSGA-II). Compared with the base models, the optimized pentagonal and hexagonal models increase the output current density by 21.8% and 39.9%, respectively.
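As a rough illustration of the optimization stage described above, the sketch below sets up a two-variable, two-objective NSGA-II run with pymoo; the bounds and the polynomial objectives are placeholders standing in for the paper's fitted RSM models, not the actual response surfaces.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class FuelCellProblem(ElementwiseProblem):
    """Inputs: inlet pressure p [atm] and temperature T [K] (bounds assumed)."""
    def __init__(self):
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([1.0, 313.0]), xu=np.array([3.0, 353.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        p, T = x
        # placeholder polynomials standing in for the RSM-fitted objectives
        power = 0.8 + 0.05 * p + 0.002 * (T - 333.0) - 1e-4 * (T - 333.0) ** 2
        consumption = 1.2 - 0.03 * p + 0.004 * (T - 333.0)
        out["F"] = [-power, consumption]  # negate power: pymoo minimizes

res = minimize(FuelCellProblem(), NSGA2(pop_size=100), ("n_gen", 200), seed=1)
print(res.F.shape)  # Pareto front of (negated power, consumption) pairs
```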
In this comprehensive study, a turboshaft engine is evaluated through an energy-environmental analysis after adding inlet air cooling and regenerative cooling. First, the flight Mach number, flight altitude, compressor-1 pressure ratio of the main cycle, turbine-1 inlet temperature of the main cycle, temperature fraction of turbine-2, pressure ratio of the auxiliary cycle, and intake air temperature drop in the inlet air cooling system are varied, and their effects on the functional performance parameters of the regenerative turboshaft engine cycle equipped with the inlet air cooling system, such as power-specific fuel consumption, power output, thermal efficiency, and the mass flow rate of nitrogen oxides (NOx, comprising NO and NO2), are studied with hydrogen used as the fuel. Then, based on this analysis, a deep neural network (DNN) model is developed to predict the energy-environmental performance of the regenerative turboshaft engine cycle equipped with the inlet air cooling system. The proposed model aims to predict the thermal efficiency and the mass flow rate of nitrogen oxides (NOx), comprising NO and NO2. The results demonstrate the accuracy of the developed DNN model, with appropriate MSE, MAE, and RMSD cost-function values for the validation, test, and training data. Likewise, for both thermal efficiency and NOx emission mass flow rate, the R and R^2 values between the predicted values and the test data are very close to 1.
In this paper, microchannel designs with secondary channels and ribs are investigated using computational fluid dynamics and coupled with a multi-objective optimization algorithm to determine and propose optimal solutions based on the observed thermal resistance and pumping power. A workflow combining Latin hypercube sampling, machine-learning-based surrogate modeling, and multi-objective optimization is presented. Random forests, gradient boosting algorithms, and neural networks were considered in the search for the best surrogate. We demonstrate that a tuned neural network can make accurate predictions and be used to build an acceptable surrogate model. Compared with conventional optimization approaches, the optimized solutions show negligible differences in overall performance. Moreover, the solutions are computed in one-fifth of the original time. Under the same pressure limits as the conventional microchannel design, the generated designs achieve temperatures more than 10% lower. When constrained to the same temperature, the pressure drop is reduced by more than 25%. Finally, the effect of each design variable on thermal resistance and pumping power is studied by employing the Shapley additive explanations technique. Overall, we have demonstrated that the proposed framework has merit and can be used as a viable methodology for the design optimization of microchannel heat sinks.
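A minimal sketch of such a sampling-plus-surrogate workflow is shown below; the design-space bounds, sample size, and the run_cfd stub are assumptions standing in for the paper's CFD setup.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

def run_cfd(X):
    """Hypothetical stand-in for the CFD evaluation of thermal resistance."""
    return X[:, 0] ** 2 + 0.5 * X[:, 1] - 0.1 * X[:, 2] * X[:, 3]

# Latin hypercube sample of a 4-variable design space (bounds assumed)
sampler = qmc.LatinHypercube(d=4, seed=0)
X = qmc.scale(sampler.random(n=200), l_bounds=[0, 0, 0, 0], u_bounds=[1, 1, 1, 1])
y = run_cfd(X)

# compare candidate surrogates by cross-validated R^2
for name, model in [("forest", RandomForestRegressor(random_state=0)),
                    ("boosting", GradientBoostingRegressor(random_state=0)),
                    ("neural net", MLPRegressor(hidden_layer_sizes=(64, 64),
                                                max_iter=2000, random_state=0))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: R^2 = {score:.3f}")
```

The winning surrogate can then replace the CFD solver inside a multi-objective optimizer, and SHAP values can be computed on it to attribute each design variable's influence.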
In this study, the effects of flight Mach number, flight altitude, fuel type, and intake air temperature on thrust-specific fuel consumption, thrust, intake air mass flow rate, thermal and propulsive efficiencies, and the exergy destruction rate of the F135 PW100 engine are investigated. Based on the results obtained in the first phase, and in order to model the thermodynamic performance of the above engine cycle, the flight Mach number and flight altitude were taken as 2.5 and 30,000 m, respectively, owing to the operational advantages of supersonic flight at high altitude and the higher thrust obtained with hydrogen fuel. Therefore, in the second phase, considering the above flight conditions, an intelligent model was obtained to predict the output parameters (i.e., thrust, thrust-specific fuel consumption, and overall efficiency) using a deep learning approach. In the obtained deep neural model, the high-pressure turbine pressure ratio, fan pressure ratio, turbine inlet temperature, intake air temperature, and bypass ratio were considered as input parameters. The available dataset was randomly divided into two groups: the first containing 6079 samples for model training and the second containing 1520 samples for testing. In particular, the Adam optimization algorithm, the mean squared error cost function, and the rectified linear unit (ReLU) activation function were used to train the network. The results show that the error percentages of the deep neural model are 5.02%, 1.43%, and 2.92% for predicting thrust, thrust-specific fuel consumption, and overall efficiency, respectively, which indicates the success of the obtained model in estimating the output parameters of this problem.
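The training setup reported above (Adam, MSE loss, ReLU activations, a 6079/1520 split) maps onto a few lines of Keras; the layer sizes and the random stand-in data below are assumptions, not the paper's architecture or dataset.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.uniform(size=(7599, 5)).astype("float32")  # 5 cycle inputs (stand-in data)
y = rng.uniform(size=(7599, 3)).astype("float32")  # thrust, TSFC, overall efficiency
X_train, X_test, y_train, y_test = X[:6079], X[6079:], y[:6079], y[6079:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3),  # three regression outputs
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
model.fit(X_train, y_train, epochs=200, batch_size=64,
          validation_data=(X_test, y_test), verbose=0)
```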
Ocean wave renewable energy has rapidly become a key part of the renewable energy industry in recent decades. With the development of wave energy converters as the primary conversion technology, their power take-off (PTO) systems have been widely studied. Tuning the PTO parameters is a challenging optimization problem because of the complex, nonlinear relationship between these parameters and the absorbed power output. In this regard, this study aims to optimize the PTO system parameters of a point-absorber wave energy converter for the wave scenario off Perth, on the Australian coast. The converter is numerically modeled to oscillate under irregular, multi-directional waves, and a sensitivity analysis of the PTO settings is performed. Then, to find the optimal PTO system parameters leading to the highest power output, ten optimization algorithms are incorporated to solve the nonlinear problem, including the Nelder-Mead search method, the active-set method, sequential quadratic programming (SQP), the multi-verse optimizer (MVO), and six modified combinations of genetic, surrogate, and fminsearch algorithms. Following a feasibility landscape analysis, the optimization is performed and the best answers in terms of PTO system settings are provided. Finally, the investigation shows that the modified combinations of the genetic, surrogate, and fminsearch algorithms outperform the other methods in the studied wave scenario, and it reveals the interactions among the PTO system variables.
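Two of the named solvers are available directly in SciPy; the sketch below optimizes a smooth stand-in for the absorbed-power model over hypothetical PTO stiffness and damping coefficients (SciPy's SLSQP plays the role of SQP here).

```python
import numpy as np
from scipy.optimize import minimize

def neg_absorbed_power(x):
    """Negated power; a smooth placeholder for the hydrodynamic PTO model."""
    k_pto, c_pto = x  # stiffness [N/m] and damping [Ns/m], assumed variables
    return -1e9 * c_pto / ((k_pto - 5e4) ** 2 + (c_pto + 1e4) ** 2)

x0 = np.array([1e5, 1e5])
res_nm = minimize(neg_absorbed_power, x0, method="Nelder-Mead")
res_sqp = minimize(neg_absorbed_power, x0, method="SLSQP",
                   bounds=[(1e3, 1e7), (1e3, 1e7)])
print(res_nm.x, res_sqp.x)  # candidate PTO settings from each solver
```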
An enhanced geothermal system is essential to provide sustainable, long-term geothermal energy supplies and reduce carbon emissions. An optimal well-control scheme for effective heat extraction and improved heat sweep efficiency plays a significant role in geothermal development. However, the optimization performance of most existing optimization algorithms deteriorates as the dimension increases. To solve this issue, a novel surrogate-assisted level-based learning evolutionary search algorithm (SLLES) is proposed for heat extraction optimization of enhanced geothermal systems. SLLES consists of a classifier-assisted level-based learning pre-screening part and a local evolutionary search part. The cooperation of the two parts achieves a balance between exploration and exploitation during the optimization process. Through iterative sampling from the design space, the robustness and effectiveness of the algorithm are shown to improve significantly. To the best of our knowledge, the proposed algorithm constitutes a state-of-the-art simulation-involved optimization framework. Comparative experiments have been conducted on benchmark functions, a two-dimensional fractured reservoir, and a three-dimensional enhanced geothermal system. The proposed algorithm outperforms five other state-of-the-art surrogate-assisted algorithms on all selected benchmark functions. The results on the two heat extraction cases also demonstrate that SLLES achieves superior optimization performance compared with a traditional evolutionary algorithm and other surrogate-assisted algorithms. This work lays a solid basis for efficient geothermal extraction from enhanced geothermal systems and sheds light on model management strategies for data-driven optimization in the area of energy exploitation.
Artificial Intelligence (AI) and Machine Learning (ML) are weaving their way into the fabric of society, where they play a crucial role in numerous facets of our lives. As we witness the increased deployment of AI and ML in various types of devices, we benefit from their use in energy-efficient algorithms for low-powered devices. In this paper, we investigate a scale and medium far smaller than conventional devices as we move towards molecular systems that can be utilized to perform machine learning functions, i.e., Molecular Machine Learning (MML). Fundamental to the operation of MML is the transport, processing, and interpretation of information propagated by molecules through chemical reactions. We begin by reviewing the current approaches that have been developed for MML, before moving towards potential new directions that rely on gene regulatory networks inside biological organisms, as well as their population interactions, to create neural networks. We then investigate mechanisms for training machine learning structures in biological cells based on calcium signaling and demonstrate their application in building an Analog-to-Digital Converter (ADC). Lastly, we look at potential future directions as well as open challenges in this area.
Traditional biological and pharmaceutical manufacturing plants are controlled by human workers or predefined thresholds. Modern plants use advanced process control algorithms such as model predictive control (MPC). However, applying deep reinforcement learning to control manufacturing plants has barely been explored. One of the reasons is the lack of high-fidelity simulations and standard APIs for benchmarking. To bridge this gap, we develop an easy-to-use library that includes five high-fidelity simulation environments: BeerFMTEnv, ReactorEnv, AtropineEnv, PenSimEnv, and mAbEnv, covering a wide range of manufacturing processes. We build these environments on published dynamic models. Furthermore, we benchmark online and offline, model-based and model-free reinforcement learning algorithms for comparison in follow-up research.
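Since a standard-API benchmarking library of this kind typically exposes a Gym-style interface, a rollout helper like the sketch below suffices to evaluate a policy; the classic Gym step/reset signatures and the constructor call in the comment are assumptions, as the exact arguments depend on the library's documentation.

```python
import gym

def run_episode(env: gym.Env, policy) -> float:
    """Roll out one episode under `policy` and return the cumulative reward."""
    obs = env.reset()  # classic Gym API: reset() returns the observation
    total, done = 0.0, False
    while not done:
        obs, reward, done, info = env.step(policy(obs))
        total += reward
    return total

# usage sketch (constructor arguments are assumptions):
# env = ReactorEnv()
# print(run_episode(env, lambda obs: env.action_space.sample()))
```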
We present a solution strategy for parameter identification in multiphase thermo-hydro-mechanical (THM) processes in porous media using physics-informed neural networks (PINNs). We employ a dimensionless form of the governing equations that is particularly well suited for the inverse problem, and we leverage the sequential multiphysics PINN solver developed in our previous work. We validate the proposed inverse-modeling approach on multiple benchmark problems, including Terzaghi's isothermal consolidation problem, Barry-Mercer's isothermal injection-production problem, and the non-isothermal consolidation of an unsaturated soil layer. We report the excellent performance of the proposed sequential PINN-THM inverse solver, paving the way for applying PINNs to the inverse modeling of complex nonlinear multiphysics problems.
Physics-informed neural networks (PINNs) are neural networks (NNs) that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself. Today, PINNs are used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has become a multi-task learning framework in which an NN must fit observed data while reducing the PDE residual. This paper provides a comprehensive review of the PINN literature; the primary goal of the study is to characterize these networks and their related advantages and disadvantages. The review also attempts to situate the publications within the broader class of collocation-based physics-informed neural networks, which comprises the vanilla PINN as well as many other variants, such as physics-constrained neural networks (PCNNs), variational PINNs (VPINNs), hp-VPINNs, and conservative PINNs (CPINNs). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, and despite demonstrations that in some contexts they can be more feasible than classical numerical techniques such as the finite element method (FEM), there is still room for advancement, most notably on theoretical issues that remain unresolved.
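The multi-task loss at the heart of a vanilla PINN is compact enough to sketch; below, a small network solves u''(x) = -sin(pi*x) with u(0) = u(1) = 0 by minimizing the PDE residual plus a boundary term (a toy example under assumed hyperparameters, not any specific paper's setup).

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    """Residual of u''(x) + sin(pi*x) = 0, via automatic differentiation."""
    x = x.requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_xx + torch.sin(torch.pi * x)

x_col = torch.rand(256, 1)            # interior collocation points in (0, 1)
x_bc = torch.tensor([[0.0], [1.0]])   # boundary points where u = 0
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss = pde_residual(x_col).pow(2).mean() + net(x_bc).pow(2).mean()
    loss.backward()
    opt.step()
# exact solution for comparison: u(x) = sin(pi*x) / pi**2
```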
Planting vegetation is one of the practical solutions for reducing sediment transfer rates. Increased vegetation cover decreases environmental pollution and the sediment transport rate (STR). Because sediment and vegetation interact in complex ways, predicting the sediment transport rate is challenging. This study aims to predict the sediment transport rate in the presence of vegetation cover using new and optimized versions of the group method of data handling (GMDH). Moreover, this study introduces a new ensemble model for predicting the sediment transport rate. The model inputs include wave height, wave velocity, vegetation cover density, wave force, D50, vegetation cover height, and cover stem diameter. A standalone GMDH model and optimized GMDH models, including GMDH coupled with the honey badger algorithm (GMDH-HBA), the rat swarm optimization algorithm (GMDH-RSOA), the sine cosine algorithm (GMDH-SCA), and particle swarm optimization (GMDH-PSO), were used to predict the sediment transport rate. As a next step, the outputs of the standalone GMDH models were used to construct the ensemble model. The MAE of the ensemble model was 0.145 m3/s, while the MAEs of GMDH-HBA, GMDH-RSOA, GMDH-SCA, GMDH-PSO, and GMDH at the testing level were 0.176, 0.312, 0.367, 0.498, and 0.612 m3/s, respectively. The Nash-Sutcliffe efficiency (NSE) values of the ensemble model, GMDH-HBA, GMDH-RSOA, GMDH-SCA, GMDH-PSO, and GMDH were 0.95, 0.93, 0.89, 0.86, 0.82, and 0.76, respectively. Moreover, this study showed that vegetation cover reduced the sediment transport rate by 90%. The results indicate that the ensemble and GMDH-HBA models can accurately predict the sediment transport rate. Based on the findings of this study, the sediment transport rate can be monitored using the IMM and GMDH-HBA models. These results are useful for managing and planning water resources in large basins.
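The ensembling step, feeding the individual models' predictions into a second-stage learner, can be sketched with scikit-learn's stacking API; generic regressors and synthetic data stand in here for the paper's GMDH variants and field measurements.

```python
import numpy as np
from sklearn.ensemble import (StackingRegressor, RandomForestRegressor,
                              GradientBoostingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 7))  # 7 inputs: wave height, velocity, cover density, ...
y = X @ rng.uniform(size=7) + 0.05 * rng.normal(size=500)  # synthetic stand-in STR

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ensemble = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge(),  # second-stage model over base predictions
)
ensemble.fit(X_tr, y_tr)
print("test MAE:", mean_absolute_error(y_te, ensemble.predict(X_te)))
```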
Data centers are huge power consumers, both because of the energy required for computation and the cooling needed to keep servers below thermal redlining. The most common technique to minimize cooling costs is increasing the data room temperature. However, to avoid reliability issues and to enhance energy efficiency, there is a need to predict the temperature attained by servers under variable cooling setups. Due to the complex thermal dynamics of data rooms, accurate runtime data center temperature prediction has remained an important challenge. By using Grammatical Evolution techniques, this paper presents a methodology for the generation of temperature models for data centers and the runtime prediction of CPU and inlet temperature under variable cooling setups. As opposed to time-costly Computational Fluid Dynamics techniques, our models do not need specific knowledge about the problem, can be used in arbitrary data centers, can be re-trained if conditions change, and have negligible overhead during runtime prediction. Our models have been trained and tested using traces from real data center scenarios. Our results show how we can fully predict the temperature of the servers in a data room, with prediction errors below 2°C and 0.5°C for CPU and server inlet temperature, respectively.
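The core of Grammatical Evolution is a genotype-to-phenotype mapping in which integer codons select grammar productions; the toy grammar and variable names below are illustrative assumptions, not the paper's actual grammar.

```python
from itertools import cycle

# toy grammar for temperature-model expressions (variable names are assumptions)
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"], ["<const>"]],
    "<op>": [["+"], ["-"], ["*"]],
    "<var>": [["T_inlet"], ["P_cpu"], ["T_room"]],
    "<const>": [["0.5"], ["1.0"], ["2.0"]],
}

def map_genotype(codons, symbol="<expr>", depth=0, max_depth=6):
    """Expand `symbol` by consuming codons; force terminals past max_depth."""
    if symbol not in GRAMMAR:
        return symbol  # terminal token
    rules = GRAMMAR[symbol]
    choices = rules[1:] if depth >= max_depth and symbol == "<expr>" else rules
    rule = choices[next(codons) % len(choices)]
    return " ".join(map_genotype(codons, s, depth + 1, max_depth) for s in rule)

print(map_genotype(cycle([3, 1, 0, 2, 2, 14])))  # -> "T_inlet * 2.0"
```

An evolutionary loop would then score each decoded expression against measured temperatures and evolve the codon sequences.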
In recent years, generative design techniques have become firmly established in many application areas, especially in engineering, and these methods have demonstrated promising growth. However, existing approaches are limited by the specificity of the problems under consideration; moreover, they do not provide the desired flexibility. In this paper, we formulate a general approach to arbitrary generative design problems and propose a novel framework named GEFEST (Generative Evolution For Encoded STructures). The developed approach is based on three general principles: sampling, estimation, and optimization. This ensures the freedom to adjust the method to a specific generative design problem, so that the most suitable approach can be constructed. A series of experimental studies was conducted to confirm the effectiveness of the GEFEST framework, covering synthetic and real-world cases (coastal engineering, microfluidics, thermodynamics, and oil field planning). The flexible structure of GEFEST makes it possible to obtain results that surpass the baseline solutions.
In this paper, we are interested in the acceleration of numerical simulations. We focus on a hypersonic planetary reentry problem, which involves coupled fluid dynamics and chemical reactions. Simulating the chemical reactions takes most of the computation time but, on the other hand, cannot be avoided if accurate predictions are to be obtained. We face a trade-off between cost-efficiency and accuracy: the simulation code must be efficient enough to be used in an operational context, yet accurate enough to predict the phenomenon faithfully. To resolve this trade-off, we design a hybrid simulation code coupling a traditional fluid dynamics solver with a neural network that approximates the chemical reactions. We rely on the power of neural networks when applied in a big-data context, as well as on the efficiency stemming from their matrix-vector structure, to achieve significant acceleration factors ($\times 10$ to $\times 18.6$). This paper aims to explain how we designed such a cost-effective hybrid simulation code in practice. Above all, we describe the methodology used to ensure accuracy guarantees, allowing us to go beyond traditional surrogate modeling and to use these codes as references.
Solute transport in porous media is relevant to a wide range of applications in hydrogeology, geothermal energy, underground CO2 storage, and a variety of chemical engineering systems. Due to the complexity of solute transport in heterogeneous porous media, traditional solvers require high-resolution meshing and are therefore computationally expensive. This study explores the application of a mesh-free method based on deep learning to accelerate the simulation of solute transport. We employ Physics-informed Neural Networks (PiNN) to solve solute transport problems in homogeneous and heterogeneous porous media governed by the advection-dispersion equation. Unlike traditional neural networks that learn from large training datasets, PiNNs only leverage the strong-form mathematical models to simultaneously solve for multiple dependent or independent field variables (e.g., pressure and solute concentration fields). In this study, we construct PiNN using a periodic activation function to better represent the complex physical signals (i.e., pressure) and their derivatives (i.e., velocity). Several case studies are designed with the intention of investigating the proposed PiNN's capability to handle different degrees of complexity. A manual hyperparameter tuning method is used to find the best PiNN architecture for each test case. Point-wise error and mean square error (MSE) measures are employed to assess the performance of PiNNs' predictions against the ground truth solutions obtained analytically or numerically using the finite element method. Our findings show that the predictions of PiNN are in good agreement with the ground truth solutions while reducing computational complexity and cost by, at least, three orders of magnitude.
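The periodic-activation idea can be expressed as a drop-in replacement for the usual Tanh layers of a PINN; the sketch below is SIREN-style, and the layer widths, frequency w0, and input/output choices are assumptions, not the paper's exact network.

```python
import torch

class Sine(torch.nn.Module):
    """Periodic activation: sin(w0 * x); w0 scales the input frequency."""
    def __init__(self, w0: float = 1.0):
        super().__init__()
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * x)

# sketch of a PINN body mapping (x, y, t) -> (pressure, concentration)
net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), Sine(w0=30.0),  # higher w0 on the first layer
    torch.nn.Linear(64, 64), Sine(),
    torch.nn.Linear(64, 2),
)
```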
This work introduces a concept for autonomous cooking processes based on a digital twin approach. It proposes a hybrid approach in which physics-based, full-order simulations are followed by a data-driven system identification process, yielding small errors. This enables faster-than-real-time digital twins that are feasible at the device level, without the need for cloud or high-performance computing. The concept is universally applicable to a variety of physical processes.
One possible way to make thermal processing controllable is to gather real-time information on the product's current state. Often, sensory devices cannot capture all relevant information easily, or at all. Digital twins close this gap with virtual probes in real-time simulations that are synchronized with the process. This paper proposes a physics-based, data-driven digital twin framework for autonomous food processing. We suggest a lean digital twin concept that is executable at the device level and requires minimal computational load, data storage, and sensor data. The study focuses on a parsimonious experimental design for training non-intrusive reduced-order models (ROMs) of a thermal process. A correlation ($R = -0.76$) between the standard deviation of the surface temperatures in the training data and the root-mean-square error in the ROM tests enables effective selection of the training data. The best ROM achieves a mean root-mean-square error of less than 1 Kelvin (0.2% mean average percentage error) on a representative test set. A simulation speed-up factor of approximately 1.8e4 allows for on-device model predictive control. The proposed digital twin framework is designed to be applicable in industry. In general, non-intrusive reduced-order modeling is required whenever the process is modeled in software that does not provide root-level access to the solver (e.g., commercial simulation software). Data-driven training of the reduced-order model can be achieved with only one dataset, since the correlation is used to predict training success.
Sustainable consumption aims to minimize the environmental and societal impact of the use of services and products. Over-consumption of services and products leads to the potential depletion of natural resources and to social inequality, as access to goods and services becomes more challenging. In daily life, a person can attain more sustainable purchases by drastically changing their lifestyle choices, possibly going against their personal values or wishes. Conversely, achieving sustainable consumption while taking personal values into account is a more complex task, as potential trade-offs arise when trying to satisfy environmental and personal goals. This paper focuses on the value-sensitive design of recommender systems, which enable consumers to improve the sustainability of their purchases while respecting their personal values. Value-sensitive recommendation for sustainable consumption is formalized as a multi-objective optimization problem, where each objective represents a different sustainability goal or personal value. Novel and existing multi-objective algorithms compute solutions to this problem, and the solutions are presented to consumers as personalized sustainable basket recommendations. These recommendations are evaluated on a synthetic dataset, which comprises three established real-world datasets from relevant scientific and organizational reports. The synthetic dataset contains quantitative data on product prices, nutritional values, and environmental impact metrics, such as greenhouse gas emissions and water footprint. The recommended baskets are highly similar to consumers' purchased baskets and align with sustainability goals and personal values relevant to health, expenditure, and taste. A considerable reduction in environmental impact is observed even when consumers accept only a small fraction of the recommendations.
Building performance has been shown to degrade significantly after commissioning, resulting in increased energy consumption and associated greenhouse gas emissions. Continuous commissioning using existing sensor networks and IoT devices has the potential to minimize this waste by continually identifying system degradation and re-tuning control strategies to adapt to the real building performance. Given their significant contribution to greenhouse gas emissions, the performance of gas boiler systems used for building heating is critical. A review of boiler performance studies was used to develop a set of common faults and degraded-performance conditions, which were integrated into a MATLAB/Simulink simulator. This produced a labeled dataset with approximately 10,000 simulations of steady-state performance for each of 14 boilers. The collected data were used to train and test fault classification using k-nearest neighbors, decision trees, random forests, and support vector machines. The results show that the support vector machine method gave the best prediction accuracy, consistently above 90%, and that generalizing across multiple boilers was not possible due to lower classification accuracy.
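A comparison of the four classifiers mentioned reduces to a short scikit-learn loop; the synthetic dataset below is a stand-in for the labeled boiler-fault simulations, and the feature/class counts are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for the labeled steady-state boiler simulations
X, y = make_classification(n_samples=10000, n_features=12, n_informative=8,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [
    ("k-nearest neighbors", KNeighborsClassifier()),
    ("decision tree", DecisionTreeClassifier(random_state=0)),
    ("random forest", RandomForestClassifier(random_state=0)),
    ("support vector machine", make_pipeline(StandardScaler(), SVC())),
]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {clf.score(X_te, y_te):.3f}")
```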
Agent-based modeling (ABM) is a well-established paradigm for simulating complex systems via interactions between constituent entities. Machine learning (ML) refers to approaches whereby statistical algorithms 'learn' from data on their own, without imposing a priori theories of system behavior. Biological systems -- from molecules, to cells, to entire organisms -- consist of vast numbers of entities, governed by complex webs of interactions that span many spatiotemporal scales and exhibit nonlinearity, stochasticity and intricate coupling between entities. The macroscopic properties and collective dynamics of such systems are difficult to capture via continuum modeling and mean-field formalisms. ABM takes a 'bottom-up' approach that obviates these difficulties by enabling one to easily propose and test a set of well-defined 'rules' to be applied to the individual entities (agents) in a system. Evaluating a system and propagating its state over discrete time-steps effectively simulates the system, allowing observables to be computed and system properties to be analyzed. Because the rules that govern an ABM can be difficult to abstract and formulate from experimental data, there is an opportunity to use ML to help infer optimal, system-specific ABM rules. Once such rule-sets are devised, ABM calculations can generate a wealth of data, and ML can be applied there too -- e.g., to probe statistical measures that meaningfully describe a system's stochastic properties. As an example of synergy in the other direction (from ABM to ML), ABM simulations can generate realistic datasets for training ML algorithms (e.g., for regularization, to mitigate overfitting). In these ways, one can envision various synergistic ABM$\rightleftharpoons$ML loops. This review summarizes how ABM and ML have been integrated in contexts that span spatiotemporal scales, from cellular to population-level epidemiology.
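The "rules applied to agents, state propagated over discrete time-steps" recipe is concrete enough to sketch; below is a minimal random-mixing SIR epidemic ABM with assumed parameter values, of the population-level kind the review discusses.

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta, gamma = 1000, 0.3, 0.1   # agents, infection prob., recovery prob. (assumed)
state = np.zeros(N, dtype=int)    # agent states: 0 = S, 1 = I, 2 = R
state[rng.choice(N, size=5, replace=False)] = 1  # seed infections

infected_counts = []
for t in range(200):
    infected = state == 1
    # rule 1: each susceptible contacts one random agent; infection w.p. beta
    contacts = rng.integers(0, N, size=N)
    new_inf = (state == 0) & (state[contacts] == 1) & (rng.random(N) < beta)
    # rule 2: each infected agent recovers w.p. gamma
    recovered = infected & (rng.random(N) < gamma)
    state[new_inf], state[recovered] = 1, 2
    infected_counts.append(int(infected.sum()))
# infected_counts is an observable that an ML model could be trained to emulate
```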