Authored & Presented By Max Kassera (yasAI)
Description
This workshop will introduce the participants to the applications of AI for simulation engineering. It will start with a beginner-friendly introduction to how AI systems work and how they are built. It will then cover use cases where AI outperforms traditional methods as well as cases where AI is not the optimal choice. Interactive elements and a Q&A are integrated to enhance understanding and engagement. Whether you're a beginner or have some knowledge of AI, this workshop will provide a balanced and realistic perspective on the evolving intersection of AI and simulation engineering.
Authored & Presented By Andy Richardson (Phronesim)
Description
This short course introduces the core components necessary to build an effective and efficient simulation capability. The course then provides a framework for building and implementing a simulation strategy targeted at achieving the organisation's product and business goals.
• Goals for Modelling and Simulation [M&S]
• Leadership Questions for M&S
• Trends and Challenges Affecting your M&S
• The Essential Elements for Effective and Efficient M&S
• Engaging Stakeholders
• Strategy Framework
• Building your Strategy
• The Essential Elements Mini Deep Dive (Process – Methods – Tools)
• What’s your Status? – Maturity Assessment
• Improvement Actions
• Strategy Implementation
• Strategy Governance
Andy is a Fellow of the Institution of Mechanical Engineers. He has over 30 years' experience in Automotive Engineering at Jaguar Land Rover, with 20 years at senior management level, including 10 years as Head of Engineering Simulation. In 2019 he formed PHRONESIM Ltd to provide consulting in all aspects of Engineering Simulation Strategy, helping multinational organisations with simulation capability maturity assessment, simulation strategy development, improvement roadmaps, and simulation strategy training and coaching. Andy is a member of the NAFEMS UK Steering Committee, the Assess Business Theme Committee, and the Simulation Governance and Management Working Group.
Authored & Presented By Bernd Fachbach (Fachbach-Consulting)
Description
Virtualization of engineering has the potential to address strategic goals like reducing time-to-market, managing increasing product complexity and variety, or raising the flexibility to react to the market. Companies can realize it at various levels. But what should a successful transformation look like?
The workshop aims to provide an impulse for successful engineering virtualization and to identify potentials and fields of activity by analyzing and discussing industry examples with the participants. It will cover key aspects such as strategic goals, boundary conditions, process, organization, technology, data, and mindset. Depending on the discussion, experience-based recommendations will be added to the conclusion.
Authored & Presented By Patrick Morelle (Consultant)
Description
In this short course, we’ll introduce the theoretical, numerical, and methodological background which will allow you to build your first Multibody Dynamics (MBD) models, and then progress to more complex ones. Examples are discussed in detail, allowing you to understand not only model construction but how and why a given model has been built that way. They illustrate the use of multiple commercial MBD software packages as concrete examples of applications of existing commercial technology. The end of the course is devoted to some of the many ways we can introduce more realism into MBD models, so as to cover aspects like mechatronics and the modeling of gearboxes, or to take into account nonlinear flexible components and their influence on the behavior of the models.
Authored & Presented By Sean Teller (Veryst Engineering)
Description
Join Dr. Sean Teller for a short course on polymer testing and modeling for solid mechanics simulations. He will present on the mechanical behavior of polymers, including time- and temperature-dependence, as well as the advanced test methods used to accurately capture the mechanical behavior. He will also discuss constitutive models available in commercial codes, including linear viscoelasticity and nonlinear viscoelastic/viscoplastic material models. Lastly, he will review methods for using these in your simulations, and the importance of a well-designed validation test.
Description
This short course provides a brief overview of the full explicit dynamics course, which is structured according to a simulation set-up, guiding the engineer through the solution steps and decisions involved in carrying out an explicit dynamic analysis. The theory, its software implementation, and the associated advantages and disadvantages are discussed to help engineers carry out explicit dynamic simulations, ensuring accurate and robust solutions through correct analysis choices and avoiding typical pitfalls.
https://www.nafems.org/training/courses/10-steps-to-successful-explicit-dynamic-analysis/
Authored & Presented By József Nagy (eulerian-solutions)
Description
In this course participants will learn about the basics of multiphysics simulations in the fields of conjugate heat transfer as well as fluid-structure interaction. After the presentation of the basic ideas as well as the coupling methods, participants will be shown several pitfalls they should avoid when setting up multiphysics simulations.
1. Introduction
 * Computational Fluid Dynamics
 * Thermal
 * Structural Mechanics
2. Multiphysics - Conjugate heat transfer
 * Boundary condition
 * Setup
 * Example
3. Multiphysics - Fluid Structure Interaction
 * Coupling methods
 * Setup
 * Example
4. Pitfalls
 * CFD
 * Structural mechanics
 * CHT
 * FSI
Authored & Presented By Łukasz Skotny (Enterfea)
Description
It seems there is a spectrum in the FEA world. On one end, you can learn the mathematics related to FEA at an expert level, and on the other end you can learn to use the software with the most sophisticated options. But for me, it's a triangle, and at the top of it is the actual understanding of how things work and what you're trying to achieve. This seems to be often omitted in how we teach FEA (and engineering in general). In this workshop I want to dive deeper into this idea and have a discussion about it.
Authored & Presented By Klemens Rother (University of Applied Sciences Munich)
Description
The FKM guideline is a standard developed by the Forschungskuratorium Maschinenbau (Research Committee for Mechanical Engineering) – FKM – for static and cyclic strength verification. Due to its broad applicability, the strength verification has become widely used in mechanical engineering and other industries.
• Evolution of the guidelines
• Benefits of the different concepts
• Overview of the procedures
• Postprocessing of results from FEA
• Selected examples
Description
This short course introduces key simulation VVUQ* concepts and methodologies aimed at building simulation credibility, and in line with current standards.
• Simulation process and role in decision-making
• Simulation management
• Verification: definition, code verification, solution verification
• Validation: definition, hierarchical validation, validation metrics
• Predictive Capability
• Uncertainty Quantification
• Standards and guides from ASME, NASA, NAFEMS...
Finally, benefits for industrial organizations, implementation issues, and recommended practices are highlighted.
(*) Verification, Validation and Uncertainty Quantification
Description
Engineering Simulation is a crucial capability for developing, refining and optimizing products. It is therefore vital that an organisation's engineering simulation capability is fast and efficient, but can also be trusted to provide reliable results so that important engineering decisions can be made with confidence. This is especially true with the increased use of AI techniques, as simulation is a critical source of trustworthy data used to train AI models. Is your engineering simulation capability ready?
Engineering Simulation is like a complex machine. It depends on many components working together in harmony to be effective, efficient, and reliable. To assess the status of their Engineering Simulation, organisations can benefit from conducting a Maturity Assessment. This enables them to identify their strengths, but more importantly to identify areas to target for improvement.
This mini workshop will present an approach and framework that organisations can use to assess the status of their organisation's engineering simulation capability, addressing the following topics:
• Organisation Goals
• Core Components of Engineering Simulation
• Simulation Maturity Assessment Types
• Approach to conducting a Maturity Assessment
• An Example Maturity Framework
• Some Practicalities
• Taking Action
• The Benefits of Conducting a Maturity Assessment
Share your experiences and learn from others via interactive mini surveys and workshop discussions:
• Your goals and challenges
• Your experiences and lessons learnt
• Benefits of maturity assessment
Andy is a Fellow of the Institution of Mechanical Engineers. He has over 30 years' experience in Automotive Engineering at Jaguar Land Rover, with 20 years at senior management level, including 10 years as Head of Engineering Simulation. In 2019 he formed PHRONESIM Ltd to provide consulting in all aspects of Engineering Simulation Strategy, helping multinational organisations with simulation capability maturity assessment, simulation strategy development, improvement roadmaps, and simulation strategy training and coaching. Andy is a member of the NAFEMS UK Steering Committee, the Assess Business Theme, and the Simulation Governance and Management Working Group.
Authored & Presented By Muhammad Saeed (Arena2036)
Description
Introduction (10 min.)
Peter Froeschle (CEO ARENA2036)
DigiTain - Digitalization for Sustainability (40 min.)
Processes, Methods and Models for the Fully Digital Product Development of Sustainable Electric Drive Architectures
• Digital Methods & Models - Overview of Applied and Developed Methods (20 min), Dr.-Ing. Karsten Keller (University of Stuttgart - ISD)
• Digital Certification - Status of Homologation / Certification by Analysis for Crashworthiness (20 min), Prof. Dr.-Ing. André Haufe (Dynamore / Ansys)
AI Act & TEF AI Matters Services (40 min.)
Muhammad Saeed (ARENA2036)
Key takeaways include:
• Understanding the AI Act's impact on engineering and simulation workflows
• Exploring TEF AI Matters services for AI validation and deployment
• Engaging in discussions on AI compliance, governance, and risk mitigation
This workshop is ideal for engineers, researchers, and decision-makers looking to align their AI-driven innovations with European regulations while benefiting from TEF AI's expertise.
Description
In this short course, we’ll explain the basic ideas of a methodology to debug linear and nonlinear implicit finite element models efficiently and systematically. We shall explain how to interpret the issues most frequently encountered when developing finite element models for static analysis and understand the associated error messages. We will explore the range of debugging tools offered by finite element software and learn how to best use them to find a solution. Many error messages are self-explanatory, but the most concerning issues with your models can lead to error messages that are much less clear and more difficult to correct. The relation between the message and its cause is not always obvious and requires significant effort to be addressed. We shall introduce during this short course the basics of how to manage these situations. We will explore the different causes of numerical difficulties, propose a methodology to speed up the debugging process, and finally reach solver convergence to obtain the expected results from your models. As a result of this course, you will also be able to efficiently prepare a discussion with your favorite software hotline to speed up the problem-solving process and get results from your models.
Description
AIAA has recently published a recommended practice for code verification for CFD, R-141-2021, and is now working on the publication of a full new international standard. This standard will be the first international standard for code verification (to the authors' knowledge) in any field of engineering simulation and, as such, it represents an important step forward in terms of how to achieve confidence in the simulation tools that the NAFEMS membership uses.
Of key importance is the concept of the observed order of accuracy. Code verification requires the error in a simulation to be quantitatively evaluated, which by implication requires the solution to the simulation to be known. The error is calculated on a series of progressively refined meshes so that the trend in the error with the spatial discretization can be determined. Specifically, this trend is plotted on a log-log graph and the observed trend should conform to a straight line, the gradient of which is the observed order of accuracy. This should then be compared with the formal order of accuracy claimed by the developer of the code.
The NAFEMS SGM and CFD working groups have been working on some code verification exemplars to demonstrate the processes set out by AIAA: specifically, they are real cases using real commercial software tools where the observed order of accuracy is found not to correspond to the formal order claimed by the software developer. In each case, the reason for the discrepancy is explored and identified, so that suitable modifications to how the software tool is used can be made, and the tool can be used with confidence. The exemplars demonstrate the essence of the new standard and why considering the observed order of accuracy is important. Whilst the focus of the standard is on the performance of the simulation solver, as the exemplars show, considering the observed order of accuracy can also identify potential issues with the pre- and post-processing steps.
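As a small illustration of the procedure described above, the sketch below estimates the observed order of accuracy from a hypothetical mesh-refinement study; the mesh sizes and error values are invented for demonstration and do not come from the exemplars.

```python
import numpy as np

# Illustrative refinement study: characteristic cell size halved each time,
# error measured against a known (exact or manufactured) solution.
h = np.array([0.08, 0.04, 0.02, 0.01])
err = np.array([3.2e-2, 8.3e-3, 2.1e-3, 5.3e-4])

# Straight-line fit of log(err) vs log(h); the slope is the observed order.
slope, _ = np.polyfit(np.log(h), np.log(err), 1)
print(f"observed order of accuracy ~ {slope:.2f}")

# Pairwise estimate on the two finest meshes, useful as a cross-check.
p_pair = np.log(err[-2] / err[-1]) / np.log(h[-2] / h[-1])
print(f"pairwise estimate ~ {p_pair:.2f}")
# Either value would then be compared with the formal order claimed by the code developer.
```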
Description
Many problems facing engineers are nonlinear in nature: the response of a structure may involve large or even permanent deformations, loads and constraints may change, components may interact with each other, etc. This short course gives an overview of these aspects, providing a background on how such effects can be simulated, considering material, geometric and boundary nonlinearities. Additionally, comparisons between linear and nonlinear analysis features and methods are made to enable users to make the correct decisions when selecting nonlinear analyses and to understand the increase in computer resources required.
https://www.nafems.org/training/courses/practical-introduction-to-non-linear-finite-element-analysis-fea/
Authored & Presented By Laurence Marks (Laurence Marks Consultant)
Description
This course is intended to be the first resource you use to grow your knowledge of the SPH method. It will help you understand what an SPH model is and how it differs from other solution technologies which can be used in the same or similar scenarios. It will introduce you to terminologies and workflows, and give you an awareness of the advantages and disadvantages of the technology. Example applications will be presented, as will where to find more in-depth resources and code.
Laurence Marks
Laurence started his engineering career as an apprentice at the Ministry of Defence in the early 1980s. The MOD then sponsored him to do a degree at Oxford Polytechnic, and it was at this time that he built his first finite element models. Since then his career has been concerned with selling, supporting, training and, above all, using finite element and CFD solutions. He was founder and CEO of SSA, which became part of Technia in 2018, and has published numerous papers in the world of life sciences, where he continues to undertake research projects. He also writes regularly for Develop3D and Benchmark magazines.
Description
Automation is a very strong trend in engineering and FEA calculations. We try to automate as many things as possible, simply to make them easier/faster/cheaper to do. But can engineering design really be automated, and to what extent is this possible? And what would be the consequences? Or, to put it differently, is engineering an art or a procedure? In this workshop we will dive deep into this issue and have a discussion about it.
Authored & Presented By Alfred Svobodnik (Mvoid Group)
Description
Members will present and discuss current activities of the MPWG. There will be members onsite, while a couple of us will be joining online. Starting with an introduction of the working group, we will present and discuss topics like upcoming events (e.g., the Multiphysics Conference in 2026) and publications (e.g., The NAFEMS Journal of Multiphysics Case Studies – Volume 2). Additionally, we will discuss with the audience the future of Multiphysics simulation.
Authored & Presented By Jack Castro (The Boeing Company)
Description
As the centerpiece of NASA's Sustainable Flight Demonstrator (SFD) project, the X-66 flight demonstrator aims to gather critical learnings that will inform future airplane programs, ultimately guiding the industry towards sustainable aviation solutions. This experimental program marks the first full-scale, flying demonstrator of a new commercial architecture since the 707 prototype in 1955. The most iconic feature of the demonstrator will be its high-aspect-ratio truss-braced wing, which is attached to an MD-90 fuselage. However, this novel configuration presents numerous design, engineering and analysis challenges that must be addressed prior to flight. This presentation will delve into both the obvious and the less apparent engineering hurdles posed by the X-66's unique design. We will showcase the essential role of advanced engineering analysis and simulation in overcoming these challenges as we transform this innovative concept into a flying demonstrator.
Biography
Jack Castro is a Technical Fellow responsible for Boeing’s enterprise structural FEA capability and strategy in the context of Boeing’s digital model-based processes. In this capacity, Jack is working with Boeing internal teams and software suppliers to develop and deploy modernized processes, especially in the Loads value stream. Jack is also the enterprise FEA Design Practice Leader, Enterprise Aeroelastic Loads Focal, and co-author/editor of Boeing’s Book 5, Finite Element Analysis of Aerospace Structures. In his day-to-day activities, Jack provides aircraft program consulting on finite element analysis methods, analysis planning, model development, verification and validation for the stress and loads organizations. In his leadership-facing role, Jack works across the ecosystem of FEA suppliers to help define vendor strategy associated with Boeing strategic needs. Prior to Boeing, Jack worked at MSC Software for 25 years in positions ranging from Nastran QA Engineer to Boeing Global Technical Program Manager. During the majority of this time, he worked in customer-facing roles providing on-site consulting and leadership in the usage, collaborative development and deployment of MSC Nastran based solutions in the domains of stress analysis, dynamics, internal/external/aeroelastic loads and structural optimization. Jack is a graduate of the University of California, Los Angeles, with a Bachelor’s Degree in Mechanical Engineering and a Master’s Degree in Civil Engineering.
Authored & Presented By Stephanie Bailey-Wood (Dassault Systemes)
Authored & Presented By Karlo Seleš (Rimac Technology)
Description
Rimac Technology (RT), formerly the Components Engineering division of Rimac Automobili (founded in 2009), has established itself as an important player in advanced performance electrification technologies. Meanwhile, the Rimac brand has evolved from creating the world’s first all-electric, record-breaking production hypercar to continually pushing the boundaries of aesthetics and dynamics through its Bugatti-Rimac enterprise. Today, RT stands as a leading Tier-1 automotive supplier, specializing in high-performance battery systems, electric drive units, electronic systems, and user interface components, solidifying its reputation in advanced performance electrification technologies.
As the company transitioned beyond its startup phase, the simulation department expanded in parallel, offering a unique opportunity to challenge and rethink conventional industry practices. One such area of focus is the evolving role of simulation engineers in the product development process.
Amid the rapidly evolving trends within the simulation community, this presentation aims to spark a thought-provoking discussion on the transforming role of industrial simulation engineers in modern product development. While simulation engineering has traditionally been viewed as a supporting function, its strategic importance in the design and development process within Rimac Technology is becoming increasingly apparent. This presentation will explore how Rimac Technology leverages simulation techniques to address challenges inherent in fast-paced, cost-sensitive industries. It will showcase the critical role simulations play throughout the development cycle, starting from initial concept ideation, where incremental improvements often fall short, to the optimization and validation stages that culminate in production-ready solutions.
By delving into Rimac Technology’s approach, the session will highlight how simulations can be used effectively at different product development stages. Moreover, it will consider how the responsibilities of simulation engineers are expanding beyond traditional analysis tasks to encompass broader topics, such as influencing design strategies, driving various levels of verification and validation campaigns, and integrating sub-system requirement considerations into engineering decisions. Ultimately, this presentation seeks to challenge conventional perceptions, illustrating how simulation engineers are emerging as key contributors to an organization’s success in an increasingly competitive and technology-driven landscape.
Biography
Karlo Seleš is an experienced structural analysis (CAE/FEA) engineer with a strong sense of responsibility. He combines a proactive mindset, resourcefulness and social intelligence with a strong academic background, under-the-hood simulation software knowledge gained from solver development, and fast-paced automotive industry experience to succeed in daily work challenges.
Presented By Jan Paul Stein (McKinsey & Company)
Authored By Alessandro Faure Ragani, Jan Paul Stein (McKinsey & Company)
17:00
Authored & Presented By Christian Heinrich (Boeing Research & Technology Europe)
Abstract
The aim of the presented work is to extend the building block approach as currently used in the certification of aircraft structures and to apply and combine it with aerodynamic aspects. The goal of this work is to reduce physical test effort and shorten development time while increasing the required degree of confidence in the performance of the aircraft. Aeroelastic effects are considered to enhance the predictive level of loads by using full fluid-structure interaction. Fluid-structure interaction is considered on certification-grade structural finite element models to include the changes of loads with the changes in displacement.
In the development of new aircraft, the structural building block approach and structural test pyramid are commonly employed and form part of the acceptable means of compliance with the regulators on the structural side. The same approach is used here to derive more accurate values for the aerodynamic loads applied to the aircraft. To that end, the verification and validation procedures are executed along the levels of the aerodynamic test pyramid. At the bottom, coupon-level simulations on 2D profiles are performed that serve as verification cases. Simple closed-form solutions are compared with numerical results to establish correct implementation of the mathematical equations in the computer code. These are flat-plate and simple-profile solutions for laminar and turbulent flow. Also, aeroelastic closed-form solutions are used. Stretching the profile in the spanwise direction will be used for validation at the element level. This closes the non-specific simulation on the aerodynamic side.
The next levels serve for validation with increasing complexity based on real geometries for later application. Here the simulation results are compared to the experimental results of a 3D wing geometry under turbulent flow as well as fluid flow around high-lift devices (detail level). Also, fluid-structure interaction of vortex shedding around a cylinder and an elastic member is considered.
Finally, the aeroelastic deformations are presented at the component level of the test pyramid. Here a composite wing structure deforms under aerodynamic loads and thus alters the flow characteristics. Considering the interaction between fluid and structure and evaluating loads based on the deformed structure is expected to improve the quality of the numerical predictions. Numerical and experimental results are compared with each other to complete the validation process. By consistently applying verification and validation from the bottom to the top of the structural and aerodynamic test pyramid, confidence can be placed in the predictive capability of the models and ultimately a reduction in test effort can be attempted.
In comparison to other fluid-structure interaction investigations, the present analysis is focused on large deformation and deflection. The methodology is presented using the example of the outer wing of a glider aircraft, which shows a tip deflection of 1.5 meters over a 5 m wing span. Secondly, no simplification of the structural model is performed, and the structural model with the resulting loads can be used as-is for structural certification. The implications of large-deformation analysis and no structural simplification on the analysis effort will be highlighted, and the resulting complications and solutions presented.
Authored & Presented By Christopher Woll (GNS Systems GmbH)
Abstract
In engineering, the implementation of artificial intelligence methods can be a useful tool to break the boundaries of data silos and efficiently process data in the field of simulation-based design. In addition to implementing comprehensible and efficient data management systems, identifying and capturing the relationships between the respective engineering data elements provides usable context that sustainably enriches the data. Contextualization makes the data highly structured, making it ideal for feeding into AI models to gain deeper insights, identify patterns and support informed decision-making.
This is also the idea behind the concept of the Data Context Hub (DCH), which was developed as a research project over six years at Virtual Vehicle Research (Graz) together with automotive and rail OEMs. The platform brings together information from R&D and manufacturing data sources as well as from telemetry data streams or storage locations such as data lakes. The DCH then creates an explorable context map in the shape of knowledge graphs from area-specific data models. These are essential for streamlining processes, reducing risks and identifying new opportunities in the data-driven development of innovative products. The use of state-of-the-art AI models also supports developers in gaining deeper insights from the data, predicting trends and automating tasks.
In the joint presentation by Context64.ai and GNS Systems, the experts will present findings from applied research on contextual graph databases. Using two practical examples from the automotive and manufacturing industries, they will demonstrate how the Data Context Hub helps companies transform complex data into clear, actionable insights.
Use case 1 concerns buildability studies in the automotive industry, where simulation times have been significantly reduced. Due to the high complexity of the product configurations, an exponential number of possible combinations and the associated slow development cycles, conventional tools and methods proved insufficient for the company. The Data Context Hub's AI-based platform served as a virtual buildability checker by providing a solution specifically for capturing, simplifying and querying complex configuration dependencies. The implemented solution presents the configuration rules to the user as nodes in a directed graph structure that can map the relationships and dependencies between different components. By consistently capturing dependencies between assembly constraints, configuration rules and special manufacturing processes, far-reaching insights can be gained that give engineers in the production process of innovative vehicles, for example, conclusions about the impairment of the rear axle by a selected battery pack. This approach allows a faster and more precise determination of the configurations that can be built, thereby shortening development times.
In a second use case, it becomes clear that the platform optimally transforms data-intensive AI applications by creating a structured, context-aware basis for information retrieval and decision-making. The use of Large Language Models (LLMs) as a chatbot provides precise, relevant answers to specific user queries about certain data contexts in technical use cases. Like the currently best-known example, ChatGPT, the chatbot uses language queries to independently generate content in the form of correlations between the data.
For example, the chatbot would respond to a request for a summary of all simulations carried out with a specific material in the last month with an interpretation of the underlying knowledge graphs. To do this, the trained model uses the internal “Memory for Your AI” (M4AI) module to search through the linked data relationships and find relevant information based on semantic similarity. This works as follows: in the first phase, a defined data model searches a large dataset for relevant information. In the following phase, retrieval-augmented generation (RAG) models are used to optimize the initial search. The model then identifies relevant information based on semantic similarity rather than by searching for exact matches. The information obtained in this way flows into the subsequent generation process. This enables users to better understand the answers provided and make decisions more quickly. The concept behind the Data Context Hub combines the ability to retrieve information from a large database with the generative capabilities of state-of-the-art AI technologies such as LLMs in a pioneering way in the field of simulation-driven design.
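To make the two-phase retrieval idea above concrete, here is a minimal, hedged sketch assuming a toy corpus and a placeholder embedding function; it is not the M4AI implementation, only an illustration of ranking snippets by semantic similarity and passing the top hits to a generative model.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a hashed bag-of-words vector. A production system
    # would use a neural sentence encoder; this stand-in only supplies a score.
    v = np.zeros(256)
    for token in text.lower().replace(",", " ").split():
        v[hash(token) % 256] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

snippets = [  # toy stand-ins for facts retrieved from the linked data
    "Simulation run 1042 used material AL-6061 and finished on 2024-05-14.",
    "Crash load case FWRB with high-strength steel, run 1043.",
    "Simulation run 1044 used material AL-6061 and finished on 2024-05-20.",
]
query = "Summarise last month's simulations that used material AL-6061"

q = embed(query)
scores = [float(q @ embed(s)) for s in snippets]            # cosine similarity
top = [snippets[i] for i in np.argsort(scores)[::-1][:2]]   # top-k context

prompt = "Answer using only this context:\n" + "\n".join(top) + f"\n\nQuestion: {query}"
print(prompt)  # in the real pipeline this prompt goes to the LLM for generation
```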
Biography
- Christopher Woll completed his studies with a Diplom-Ingenieur (BA) in Information Technology and a Master of Science in Computational Sciences in Engineering
- Working for GNS Systems since 2004
- Subject area: HPC specialist and CAE-IT consultant, expert for innovative solutions for virtual product development
- Extensive experience with well-known international customers from the industrial manufacturing, automotive and life science sectors
- Current position: Christopher Woll has been managing director of GNS Systems GmbH since 2015
Authored & Presented By Milan Tasic (Airbus Operations)
Abstract
Historically, plastic bending was analyzed by integrating the nonlinear stresses over the critical section. In this method, the stress-strain curve is simplified using the Ramberg-Osgood or the Cozzone method; see Reference [1]. Today, due to modeling democratization, many structures are analyzed using detailed finite element models. This method can give good results in accurately predicting stresses in the linear domain, but it can lead to inaccuracies if the wrong material representation or element type is used. Furthermore, special attention should also be given to manufacturing tolerances, which also impact the strength of the parts. This paper attempts to address these issues by comparing two cases where DFEM results, tests, and analytical methods were compared.
The first case represents a double-shear lug joint. A detailed 3D finite element model was created in order to analyze the force required to close a gap between the female and male lugs. These results were compared with a test and an analytical method based on Reference [1]. The detailed finite element model and the analytical method show good correlation, but the test results vary due to manufacturing tolerances and the inability to accurately measure preload at the lugs. This case demonstrates a correct modeling technique with respect to element type and material, but it also shows that the analytical method is accurate as well. A further improvement could be made by taking into account the manufacturing tolerances and having better instrumentation for the preloads.
The second case represents a beam with sideways loading. The beam was initially modeled with 2D elements, but this modeling was too conservative, as the capacity predicted by the test was higher. Possible reasons for this discrepancy include the inability of the 2D FEM to predict the ultimate capacity of the material, especially in the web-to-flange radius area. 3D modeling showed good test correlation. The results from the test and from 3D modeling were used to develop an analytical method for this analysis that can be applied to similar cases.
References
[1] E.F. Bruhn, "Analysis and Design of Flight Vehicle Structures", 1973, Chapter C3
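As a generic illustration of the classical approach mentioned in the first sentence (not the paper's actual lug or beam data), the sketch below integrates a Ramberg-Osgood stress distribution over a rectangular critical section to obtain a plastic bending moment; the material constants, section size and curvature are assumed purely for demonstration.

```python
import numpy as np
from scipy.optimize import brentq

E, sigma_y, n = 71000.0, 450.0, 10.0    # MPa, MPa, hardening exponent (assumed values)

def strain_from_stress(sig):
    # Ramberg-Osgood curve with a 0.2 % offset yield definition
    return sig / E + 0.002 * (sig / sigma_y) ** n

def stress_from_strain(eps):
    if eps == 0.0:
        return 0.0
    # The curve is monotonic, so a bracketing root solve inverts it safely.
    s = brentq(lambda sig: strain_from_stress(sig) - abs(eps), 0.0, 5.0 * sigma_y)
    return np.sign(eps) * s

# Rectangular section b x h in pure bending; plane sections assumed to remain plane.
b, h = 20.0, 40.0                        # mm
kappa = 6.0e-4                           # imposed curvature, 1/mm (illustrative)
y = np.linspace(-h / 2, h / 2, 201)
sigma = np.array([stress_from_strain(kappa * yi) for yi in y])

dy = y[1] - y[0]
M = float(np.sum(sigma * y * b) * dy)    # moment from integrating stress over the section
M_yield = sigma_y * b * h**2 / 6         # elastic-limit moment for comparison
print(f"M = {M/1e3:.0f} kN*mm, elastic-limit moment = {M_yield/1e3:.0f} kN*mm")
```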
Authored & Presented By David Wieland (Southwest Research Institute)
Abstract
As the US military extends the service life of aging weapon systems, the need for accurate models of aircraft structures has become critical. The original finite element models (FEMs) used during the development of these legacy systems were either not procured by the military or not maintained, necessitating the development of new FEMs from 2D drawings and scanned parts. Given that many weapon systems were tested up to 60 years ago, the absence of original test data presents significant challenges for model validation. This has led the United States Air Force to perform full-scale tests for model validation.
This presentation delves into Southwest Research Institute's recent experiences in performing full-scale validation tests for the T-38 and A-10 aircraft finite element models. The full-aircraft NASTRAN structural analysis model of the T-38 is being developed by Northrop Grumman. The A-10 aircraft structural analysis model is being developed by the United States Air Force A-10 analysis section with support from Southwest Research Institute. To perform these full-scale validation tests, a diverse array of measurement techniques was employed, including deflection potentiometers, strain gages, fiber optic strain sensors, and digital image correlation. Depending on the component, the validation tests are often performed on structure that will be returned to active service. This necessitates extra caution to ensure the structure is not damaged, and can limit the options for attaching to and loading the structure. The presentation will discuss the specific test setups, the resulting data, the status of the validation effort and key lessons learned from the validation process.
In addition to these validation efforts, SwRI will discuss how the validated T-38 finite element analysis model is being used to update the damage tolerance analysis of the -33 wing. This includes stress-to-load equation development, development of stress sequences, and damage tolerance analysis to determine crack growth progression and failure.
Presented By Wolfgang Wagner (Virtual Vehicle Research)
Authored By Wolfgang Wagner (Virtual Vehicle Research), Karlheinz Kunter (Virtual Vehicle Research Center Graz), Miguel Moldes Carballal (CTAG), Vanessa Ventosinos Louzao (CTAG), Patryk Nossol (Fraunhofer-Institut für Werkzeugmaschinen und Umformtechnik IWU), Jorge Velasco (Cidaut Foundation), Victor Garcia Santamaria (Applus+ IDIADA), Mario Perez (Applus+ IDIADA), Matteo Basso (CRF), Mohab Elmarakbi (Northumbria University), Ahmed Elmarakbi (Northumbria University)
Abstract
Road safety has increased steadily in recent decades. This is particularly evident in the number of serious accidents involving fatalities and injuries with long-term consequences. Nevertheless, further efforts must be made to achieve the ambitious goals of the EU's "Vision Zero" road safety initiative. The EU project SALIENT is intended to make an important contribution to the implementation of this initiative, particularly by considering front-dominated crash load cases of vehicles of different classes (compatibility).
In SALIENT, various concepts for the redesign of front-end structures using innovative materials and manufacturing technologies are developed to increase crash safety while simultaneously reducing the weight of the components. In addition to a passive basic concept, innovative approaches based on active components are also being investigated. By means of ADAS sensors, the opposing vehicle can be identified and assigned to certain vehicle classes such as small cars, trucks or SUVs. Furthermore, information is available about the opposing vehicle and the anticipated crash situation, such as velocity, impact angle and crash overlap. Based on these data, the stiffness of the load paths can then be specifically adapted to reduce the severity of the accident. One of these concepts is based on a fiber-reinforced crash box with an embedded shape memory alloy (SMA) layer, which allows the crash box to be stiffened when activated.
In this paper, this approach as well as the passive baseline concept are presented, virtually analyzed and evaluated for their effectiveness in several scenarios. In the first step, the concepts are extensively validated at component level, supported by experimental data and finite element (FE) models. The understanding and verification of this behavior at component level provides the basis for the development of assessment methodologies at full vehicle level, which can only be investigated virtually due to the high costs involved. Alongside standard Euro NCAP crash load cases (FWRB, MPDB), additional scenarios are also considered that will become relevant in future mixed traffic with autonomous and non-autonomous vehicles. The findings highlight the potential benefits of integrating advanced active systems into future front-end structure designs.
Presented By Arnab Ghosh (CelSian Glass & Solar bv)
Authored By Arnab Ghosh (CelSian Glass & Solar bv), Johan van der Dennen (CelSian)
Abstract
The production of high-quality glass demands precise control over furnace parameters, raw material inputs, and a thorough assessment of the final glass product. Quality and defect levels are crucial factors influencing efficiency, cost, waste, and sustainability in glass manufacturing. Given the multitude of furnace parameters, often on the order of 100, determining quality through numerical simulation alone remains a challenging problem. Adding to this complexity is the time delay between the formation of molten glass and the detection of defects in the final product. To achieve good-quality glass, the residence time of the glass in the furnace needs to be at least 8-12 hours. This residence time is not constant and depends strongly on the process parameters. Changes made to the inputs will affect the quality of the final product at a much later stage. A tool that predicts the upcoming glass quality as a function of current and previous inputs supports optimal furnace performance.
CelSian addresses these challenges by integrating machine learning with furnace simulation to predict glass quality based on process parameters and user input variables. Delfos, powered by CelSian’s CFD simulation package GTM-X, simulates furnace dynamics from user inputs, capturing changes in parametric values over time. By training an AI model on a large dataset, Delfos predicts defect counts over time. This approach combines the physics-driven simulation of GTM-X with the predictive power of AI, offering a novel pathway for proactive quality control in glass production. An important aspect of the development is a parallel, government-funded project to speed up the CFD code significantly. This is done through AI-enforced solvers and the use of GPUs. The currently ongoing research projects for glass quality prediction show promising results. This presentation will show some of the results achieved and a forecast of future improvements. The implementation of AI to speed up the CFD is also briefly discussed.
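As a schematic illustration of the training idea described above (predicting later defect counts from earlier furnace inputs), the sketch below fits a simple regression on synthetic, time-lagged data; the variables, lag and coefficients are invented and are not Delfos or GTM-X outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 2000
pull_rate = 50 + rng.normal(0, 2, hours)       # synthetic process parameters
crown_temp = 1550 + rng.normal(0, 5, hours)

lag = 10  # assumed hours between an input change and its effect on quality
# Synthetic "observed" defect counts, driven by the inputs `lag` hours earlier.
defects = 30 - 0.4 * (crown_temp[:-lag] - 1550) + 0.8 * (pull_rate[:-lag] - 50) \
          + rng.normal(0, 1, hours - lag)

# Regress defect counts on the lagged inputs (a stand-in for the trained AI model).
X = np.column_stack([pull_rate[:-lag], crown_temp[:-lag], np.ones(hours - lag)])
coef, *_ = np.linalg.lstsq(X, defects, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - defects) ** 2)))
print("fitted coefficients:", np.round(coef, 2), " RMSE:", round(rmse, 2))
```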
Presented By Tony Porsch (Volkswagen)
Authored By Tony Porsch (Volkswagen), Karl Heinz Kunter (Virtual Vehicle Research GmbH), Jean-Daniel Martinez (Audi AG)
Abstract
Predicting structural failure in automotive engineering remains a significant challenge in virtual vehicle development, and it is gaining further importance in the context of "Virtual Certification." The increasing use of modern lightweight materials, ultra-high-strength steels, and new, innovative joining techniques contributes to heightened material diversity and complexity in vehicle bodies. Traditional resistance spot welds are now complemented by the growing use of self-piercing rivets, line welds, and flow-drill screws, among other techniques. The failure of these connections is of particular concern in crash scenarios, as it significantly impacts vehicle safety. Therefore, robust and industry-applicable computational methods are essential for dealing with the complexity of vehicle structures and delivering reliable predictive results.
In this seminar, the L2-Tool, a modular failure assessment framework developed at the Virtual Vehicle Research Center in a joint research project with Volkswagen AG and Audi AG, will be presented. The key element of this framework is the assessment of failure with special surrogate models, which guarantee high prediction quality at low additional computing time. High-strength lightweight materials in particular carry an increased risk of crack initiation under plane tensile load, for example due to the heat input in the welding process or due to the notch effect of rivets and flow-drill screws. A key element of this method is that these two types of failure can be distinguished and assessed using a non-local approach. For the parameterization of the failure models, a combination of real and virtual testing with detailed, small-scale specimens is used, which will be briefly outlined in the presentation. After the development phase, the failure models are integrated into the product development process in a multi-stage integration process, starting with implementation via user interfaces, followed by a comprehensive test phase and final industrialization by the crash solver provider.
In the conclusion of the presentation, illustrative results of the L2-Tool applied to vehicle substructures are presented. The framework within the standardized calculation process is also described, with emphasis on the pre- and post-processing phases. The predictive accuracy of the method is addressed, and finally, potential applications are shown.
Presented By Marco Turchetto (Esteco SPA)
Authored By Marco Turchetto (ESTECO SpA), Alessandro Viola (ESTECO SpA), Tobias Gloesslein (ESTECO Software GmbH)
Abstract
Manufacturing companies face growing product complexity and need higher levels of simulation in product development. Keeping simulation results and product data synchronized is essential to shorten design cycles and increase throughput. For many companies, this is still a challenge. Design processes are fragmented across disconnected Product Lifecycle Management (PLM), Simulation Process and Data Management (SPDM) and Multidisciplinary Design Optimization (MDO) environments which run on different applications, leading to siloed datasets. The lack of interoperability between the various CAD, CAE, process automation and design optimization authoring tools and data management systems often results in data duplication, loss of traceability and an inconsistent user experience. This prevents valuable analysis and simulation information from being maintained and used throughout a product’s lifecycle. A loss of knowledge and experience in simulation-driven design, amplified by a globally distributed workforce, creates an additional layer of difficulty. This necessitates capturing and transferring knowledge through democratization and automation of engineering workflows. Every stakeholder, from CAD designers to CAE analysts, should participate in the simulation process and seamlessly execute simulation workflows to find trusted designs early in product development.
In this paper, we present a state-of-the-art federated approach to managing and automating design, simulation and optimization processes through the digital thread, while preserving the relationships between teams, their processes, tools, and data that contribute to improving product performance. This happens by connecting multiple PLM and SPDM systems with an open digital engineering platform. What if this platform could act as a key enabler to orchestrate and automate diverse CAD/CAE authoritative sources of data and models into a ready-to-use simulation workflow to collaboratively perform system-level MDO?
We demonstrate this innovative approach by taking as a reference the simulation-driven design process of a crankshaft, one of the most important parts of an engine’s power transmission system. Cross-functional engineering teams such as CAD designers, CAE analysts for structural and modal analysis, as well as simulation experts and other stakeholders can all benefit from an agnostic web-based framework to convey and automate various CAD/CAE models (stored in different PLM, SPDM and other data management systems) into a single executable simulation workflow. This allows them to seamlessly conduct further design optimization analyses with the goal of finding the best compromise between the crankshaft’s weight, its deformation under load, and its maximum stress.
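To illustrate the kind of weight-versus-deformation-and-stress trade-off mentioned at the end of the abstract, here is a deliberately simplified sketch: it minimises the mass of a shaft-like component under deflection and stress constraints, using analytic beam formulas as stand-ins for the CAE responses that the described workflow would obtain from the connected simulation tools. All dimensions, loads and limits are assumed for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

rho, E, L, F = 7.85e-6, 210e3, 500.0, 10e3      # kg/mm^3, MPa, mm, N (assumed)

def mass(d):        return rho * np.pi * d**2 / 4 * L                 # solid round bar
def deflection(d):  return F * L**3 / (48 * E * np.pi * d**4 / 64)    # simply supported, mid-span load
def stress(d):      return (F * L / 4) / (np.pi * d**3 / 32)          # max bending stress

res = minimize(
    lambda x: mass(x[0]),                       # objective: minimise mass
    x0=[60.0],
    constraints=[
        {"type": "ineq", "fun": lambda x: 0.5 - deflection(x[0])},    # deflection <= 0.5 mm
        {"type": "ineq", "fun": lambda x: 250.0 - stress(x[0])},      # stress <= 250 MPa
    ],
    bounds=[(30.0, 120.0)],
    method="SLSQP",
)
d_opt = res.x[0]
print(f"d = {d_opt:.1f} mm, mass = {mass(d_opt):.2f} kg, "
      f"deflection = {deflection(d_opt):.3f} mm, stress = {stress(d_opt):.1f} MPa")
```

In a federated MDO setting, the analytic functions above would be replaced by calls into the orchestrated CAD/CAE workflow, but the constrained-optimization structure stays the same.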
Presented By Marcus Stegemann (Fraunhofer IGD)
Authored By Marcus Stegemann (Fraunhofer IGD), Daniel Weber (Fraunhofer IGD), Johannes Mueller-Roemer (Fraunhofer IGD), Daniel Stroeter (TU Darmstadt)
Abstract
Today, engineering processes rely on structural analysis using computer-aided design (CAD). This typically involves discretizing the geometry to apply the finite element method (FEM), solving the partial differential equations (PDEs) of elasticity. The accuracy of the FEM depends on the resolution of the discretization. However, a high resolution typically leads to slower run-time performance, because each element adds computational cost. Starting from a CAD geometry with specified load cases, computing the elasticity PDE requires multiple steps, each of which can become a bottleneck if executed on the CPU. For fast and automated computation, we suggest a GPU-accelerated adaptive simulation pipeline for structural analysis. Due to their capabilities in representing complex geometries and facilitating robust local adaptivity, unstructured tetrahedral meshes are a well-suited underlying structure for mesh adaptation.
Since previous work presented fast simulation (Weber et al. [1]), massively parallel optimization and remeshing of unstructured tetrahedral meshes (Ströter et al. [2], [3]), and data structures for massively parallel matrix assembly algorithms (Mueller-Roemer [4]), this work focuses on a-posteriori adaptive mesh refinement of discretized models. This closes a remaining gap, with fully automated GPU-accelerated adaptive structural analysis for CAD models on the horizon. Our method achieves a speedup of 2× to 10× compared to the open-source mesh adaptor MMG [5] for tetrahedral meshes. By shifting the bottleneck away from mesh adaptation, the overall computation time of certain structural analysis tasks can be reduced by half. The method utilizes the GPU for error estimation and sizing-field processing. As a result, the proportion of these steps in the overall runtime is negligible. With demand-oriented adaptation and little data transfer between CPU and GPU, we achieve fast mesh adaptation to a sizing field. In combination with the fast structural analysis by Weber et al. [1], our pipeline quickly determines structural analysis results close to the so-called mesh-independent solution without laborious manual intervention.
References
[1] D. Weber, T. Grasser, J. Mueller-Roemer and A. Stork, "Rapid Interactive Structural Analysis," 2020.
[2] D. Ströter, J. S. Mueller-Roemer, D. Weber and D. W. Fellner, "Fast harmonic tetrahedral mesh optimization," The Visual Computer, vol. 38, pp. 3419–3433, June 2022.
[3] D. Ströter, A. Stork and D. Fellner, "Massively Parallel Adaptive Collapsing of Edges for Unstructured Tetrahedral Meshes," 2023.
[4] J. S. Mueller-Roemer, "GPU Data Structures and Code Generation for Modeling, Simulation, and Visualization," TUprints, 2020.
[5] C. Dobrzynski, "MMG3D: User Guide," Technical Report RT-0422, INRIA, ⟨hal-00681813⟩, 2012.
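As a generic illustration of the a-posteriori step discussed above (not the authors' GPU implementation), the sketch below converts per-element error indicators into a target sizing field using a textbook error-equidistribution scaling; the element sizes, indicators and assumed convergence order are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_elem = 10
h_current = rng.uniform(2.0, 8.0, n_elem)      # current element sizes (mm)
eta = rng.uniform(1e-4, 5e-2, n_elem)          # a-posteriori error indicators

p = 2.0                                        # assumed convergence order of the discretization
eta_target = 5e-3                              # per-element error target

# Equidistribution: shrink elements whose error exceeds the target, grow the rest,
# clamping the factor to avoid overly aggressive size jumps between passes.
factor = np.clip((eta_target / eta) ** (1.0 / p), 0.25, 4.0)
h_target = h_current * factor

for hc, e, ht in zip(h_current, eta, h_target):
    print(f"h = {hc:5.2f}  eta = {e:8.2e}  ->  h_target = {ht:5.2f}")
# The resulting sizing field would then drive the remesher for the next adaptation pass.
```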
17:20
Authored & Presented By Stephen Cook (Northrop Grumman)
Abstract
The aircraft certification process for both civil and military air systems carries the reputation of being a costly, paper-centric process [1]. Applicants seeking to achieve certification must provide copious amounts of data and test evidence to establish the engineering pedigree of the aircraft. One of the promises of digital engineering is the use of high-fidelity engineering models as a superior source of data for authorities to find compliance with airworthiness regulations. This approach uses engineering simulation models as the authoritative source of truth for making airworthiness determinations and risk assessments. However, there are practical obstacles to real adoption of model-based aircraft certification. This paper will detail these challenges to achieving model-based aircraft certification, and propose ways to overcome them, in four categories:
Culture: Travel by aircraft is one of the safest forms of transportation, in part due to rigorous airworthiness standards and processes. As a result, the aircraft certification culture is reluctant to change. Pathfinder projects have been formulated to show the value of model-based aircraft certification. The paper will propose next steps to develop a positive certification culture around the use of models in the certification process.
Competency: The rapid onset of digital engineering tools has created a specialized skillset around the design, construction, and format of the model and its corresponding data. Recently a European aircraft industry consortium stated that there is a need to increase “awareness, trust, skills, knowledge, training, experience and mindsets” among engineers using models in the certification process [2]. The paper will discuss some of the airworthiness credentialing efforts underway and the potential to develop training tailored for model-based aircraft certification.
Collaboration: The current aircraft certification process involves generating data and sending the results to the airworthiness authority to be reviewed at another time. In contrast, digital models offer the possibility of collaborating in the model in real time and conducting the showing and finding of compliance simultaneously. The full paper will discuss some of the obstacles that must be overcome to enable collaboration in the model, including the availability of regulatory personnel, configuration control of the model, and the ability of models to accurately simulate failure conditions. The paper will also explore the possibility of augmenting collaboration with artificial intelligence to assist the showing and finding of compliance.
Credibility: The aircraft certification process moves at the speed of trust. A recent guide to certification by analysis (CbA) stated that “developing methods to ensure credible simulation results is critically important for regulatory acceptance of CbA” [3]. For engineers to trust models as the authoritative source of truth will require ways to show the credibility of the models through appropriate processes and metrics, which will be discussed in the paper.
The paper will provide recommendations for near-term steps that the community can take to promote progress in each of these four areas. Finally, the paper will identify areas where additional research and pathfinder programs would be valuable to enable model-based aircraft certification.
References:
1) Jiacheng Xie, Imon Chakraborty, Simon I. Briceno and Dimitri N. Mavris, "Development of a Certification Module for Early Aircraft Design," AIAA 2019-3576, AIAA Aviation 2019 Forum, June 2019.
2) Fabio Vetrano, et al., "Recommendations on Increased Use of Modelling and Simulation for Certification / Qualification in Aerospace Industry," AIAA-2024-1625.
3) Timothy Mauery, et al., "A Guide for Aircraft Certification by Analysis," NASA/CR-20210015404, May 2021.
Biography
Dr. Stephen Cook is the Northrop Grumman Fellow in Airworthiness, responsible for developing and implementing airworthiness policy and strategy across Northrop Grumman’s portfolio of manned and unmanned aircraft. Within the broader aircraft certification community, Dr. Cook leads standards development efforts in multiple consensus standards bodies including the Aerospace Industries Association, ASTM, and the International Civil Aviation Organization. He currently chairs the Airworthiness Subcommittee for ASTM Committee F38 on Unmanned Aircraft Systems and leads the task group revising National Aerospace Standard 9945 “Airworthiness Engineering Training and Education” that is providing guidelines and curricula for airworthiness professional development. His efforts in training and education resulted in Embry-Riddle Aeronautical University establishing a postgraduate degree program in airworthiness in 2019, the first in the United States. Dr. Cook was inducted as an Associate Fellow of AIAA in 2016, was awarded the Engineers’ Council Outstanding Engineering Merit Achievement Award in 2018 and received the ASTM Award of Appreciation in 2019.
Authored & Presented By Maria Bonner (Siemens)
Abstract
In the dynamic field of Computational Fluid Dynamics (CFD), the integration of Generative AI and Large Language Model (LLM) agents presents significant opportunities for workflow automation and enhancing efficiency. By employing specialized approaches for storing and querying source data, such as vector databases and graph-based databases, overall precision can be further improved. This presentation explores the innovative application of Generative AI, combined with advanced data retrieval mechanisms from vector databases and graph-based databases, to automate the CFD workflow. It specifically focuses on the use case of employing LLM agents for Java macro generation within the Simcenter STAR-CCM+ tool. By leveraging traditional Retrieval-Augmented Generation (RAG) techniques in conjunction with the GraphRAG technique, we can significantly enhance the accuracy of CFD workflow processes in a CFD tool. Extraction of sources into structured data plays an important role in this integration, providing a structured and formalized source of engineering knowledge. These structures capture and interlink diverse data points, enabling access to relevant information and facilitating the generation of Java macros, which underpin the workflow process in a CFD tool. The combination of knowledge graphs with AI-driven techniques accelerates the retrieval process and makes it more precise.
Our presentation will begin by outlining the fundamental concepts of the RAG technique, knowledge graph technology, and their application in CFD workflow automation. We will demonstrate how AI, when fed with structured data, can rewrite deprecated code and generate, modify and extend Java macros for a CFD-specific tool. This not only improves the efficiency of the workflow but also ensures that the generated macros are up to date with the latest releases. Additionally, we will showcase real-world examples where these techniques have shown promise.
We will discuss the implications of this approach, focusing on the quality of data and the preprocessing stage, and their impact on the accuracy and reliability of the final result. Emphasizing the importance of data integrity, we will explore how careful preprocessing can prevent errors and enhance the overall performance of the AI models. Finally, we will share feedback from users of our very first Copilot prototype, highlighting their experiences and the tangible benefits observed in their CFD workflows. This feedback will provide valuable insights into the practical applications and future potential of integrating GenAI with workflow automation processes.
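As a purely illustrative sketch of the GraphRAG idea outlined above, the snippet below stores a few invented API facts in a small knowledge graph, retrieves the facts reachable from a seed concept, and assembles them into a prompt for macro generation; the node names and relations are hypothetical and are not the Simcenter STAR-CCM+ API or the presented prototype.

```python
import networkx as nx

# Toy knowledge graph of engineering/API facts (invented for illustration).
kg = nx.DiGraph()
kg.add_edge("MeshOperation", "AutomatedMesh", relation="has_type")
kg.add_edge("AutomatedMesh", "BaseSize", relation="has_parameter")
kg.add_edge("BaseSize", "setValue", relation="set_via")

def graph_context(graph: nx.DiGraph, seed: str, depth: int = 2) -> list:
    """Collect (subject, relation, object) facts reachable from a seed node."""
    facts, frontier = [], {seed}
    for _ in range(depth):
        nxt = set()
        for node in frontier:
            for _, obj, data in graph.out_edges(node, data=True):
                facts.append(f"{node} -{data['relation']}-> {obj}")
                nxt.add(obj)
        frontier = nxt
    return facts

request = "Generate a Java macro that changes the mesh base size."
prompt = "Known API facts:\n" + "\n".join(graph_context(kg, "MeshOperation")) \
         + f"\n\nTask: {request}"
print(prompt)  # this graph-grounded prompt would be passed to the LLM agent
```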
BiographyMaria Bonner is Solution Architect for Semantic Web, AI-based and MBSE solutions at Siemens Digital Industries in Nuremberg. With a background in mathematics and computer science, Maria worked on various topics related to requirement-to-model, verification and simulation methods. Throughout her professional journey at Siemens, Maria contributed to the enhancement of Siemens products by incorporating AI-based methods into their features. Maria's educational achievements include a Diploma in Mathematics from the Novosibirsk State University in 2006, followed by a PhD in Computer Science from the University of Paderborn in 2011.
Presented By Jung Hun Choi (Hyundai Motor Company)
Authored By Jung Hun Choi (Hyundai Motor Company), Sung il Park (Hyundai Motor Company)
AbstractAlthough the use of air suspension is increasing for reasons of ride comfort, commercial trucks are still sold with rigid axles and leaf spring suspension, mainly because of high payloads and rough-road driving. Due to these structural characteristics, geometry errors inevitably occur under vehicle driving loads, and resolving the braking pull problems they cause takes considerable time during the vehicle development stage. This study therefore developed a methodology to predict the geometry errors of the suspension and steering system under vertical and braking loads more reliably in the early design stages, and applied it to vehicle development to improve braking pull. 3D FEM leaf spring simulation has the advantage of being able to check deformation and stress for various loads, but it requires substantial modeling and simulation time to respond to repeated design changes. 2D FEM leaf spring analysis, on the other hand, can quickly and reliably predict leaf spring behavior under the vertical and braking loads required in early design reviews, and it can be used together with design reviews for stiffness, strength and durability, which makes it advantageous for combined reviews with the steering system early in the design stage. This study therefore compared 2D FEM results against 3D FEM analysis and system test measurements to establish a reliable 2D FEM modeling level, and predicted leaf spring behavior for the relevant load conditions. The FEM leaf model was built up from the 2D shape of each individual plate in the free-camber state through to the U-bolt tightening condition, so that the free-camber shape of the leaf spring assembly, and hence its behavior under vertical and braking loads, could be predicted at the design stage before the leaf springs are manufactured. Based on this 2D model, the geometry errors for various knuckle arm hp positions and drag link and pitman arm combinations were reviewed quickly and reliably. In addition, a kinematic model was built using the basic library of OpenModelica to perform a kinematic review of the steering system, and this was used to design a steering and suspension system that minimizes braking drift in the early stage of vehicle design.
Authored & Presented By Florent Mathieu (EikoSim)
AbstractIn the aerospace industry, ensuring the reliability of simulation models is a critical step in the design and validation of complex structures. These models play a pivotal role in reducing development costs and time by minimizing the need for extensive physical testing, while also ensuring structural integrity and safety under extreme operational conditions. However, achieving high levels of confidence in simulation models can be challenging, especially when working with limited experimental data. This challenge is particularly relevant in projects such as the Dual Launch Structure (DLS) of the Ariane 6 launcher, where the ability to predict performance accurately is essential for operational success.This study explores the use of advanced validation techniques, focusing on integrating data fusion methodologies and Digital Image Correlation (DIC) technology. These approaches enable engineers to extract more information from experimental tests, align diverse datasets, and improve the accuracy of numerical simulations. By combining data from multiple sources—such as DIC measurements, strain gauges, and fiber optics—into a comprehensive validation framework, this methodology addresses the inherent limitations of traditional validation processes and enhances the credibility of simulation models.The Ariane 6 launcher project, led by ArianeGroup, presents an exemplary case study for these techniques. The DLS, a critical component of the launcher, is designed to accommodate dual payloads during launch, ensuring structural reliability under severe loading conditions. Its validation process involves a full-scale test that incorporates a variety of data sources to capture deformation, strain, and overall structural behavior. EikoSim contributed to this process by introducing a tailored Smart Testing framework, which integrates disparate data streams into a cohesive model. This framework supports efficient preprocessing and alignment of experimental and numerical data, ensuring compatibility and reducing potential sources of error.A key innovation in this validation process is the use of Digital Image Correlation (DIC). This optical measurement technique captures full-field strain and deformation by analyzing surface displacements in real time. Unlike traditional point-based sensors, such as strain gauges, DIC provides a more holistic view of structural responses, making it a valuable tool for understanding complex behaviors. During the DLS validation, three DIC systems were deployed in critical regions identified through Finite Element Analysis (FEA). These systems complemented traditional sensors, such as strain gauges and displacement transducers, to create a robust dataset for comparison with simulation predictions.Data fusion played a pivotal role in enhancing the validation process. By integrating data from multiple sources, the methodology leverages the complementary strengths of each measurement technique. For example, strain gauges provide high-precision local measurements, while DIC captures global structural responses, and fiber optic sensors offer continuous data along predefined paths. This multi-faceted approach enabled the identification of discrepancies between FEA predictions and experimental results, with most deviations remaining within acceptable limits of less than 20%. 
These differences were attributed to factors such as nominal boundary conditions, material property assumptions, and simplifications in the numerical model.The results of this study underscore the importance of robust validation frameworks in aerospace engineering. The combination of advanced data fusion and DIC techniques allowed for the creation of a highly accurate simulation model of the DLS, reducing the need for repeated physical tests and enabling the identification of areas for model refinement. This approach not only validated the DLS for operational use in Ariane 6 but also set a benchmark for future projects involving complex structural components.
Authored & Presented By Dominik Borsotto (SIDACT GmbH)
AbstractWith the rapid development of AI applications in recent years, the already growing number of simulation runs being performed has increased even further, in particular to provide simulations with which to train AI models. While current Simulation Data Management systems and IT infrastructure already allow huge datasets to be stored, accessed and, in principle, analysed, users typically only have the tools and the time for rather straightforward model-to-model comparisons between current model versions and their immediate predecessors. Using Principal Component Analysis, a dimension-reduction algorithm from the family of unsupervised learning techniques, a new database was developed and presented at the last NAFEMS World Conference. Continuously fed with new simulation runs, this database automatically detects unknown behaviour in the most recent runs compared with all of their predecessors at once. To achieve this, the database not only has to store and detect every new deformation pattern; several further obstacles had to be addressed, such as mapping between different simulation models, a performance-efficient database format, and a technique to detect local effects. Responding to engineers' need to include history curves in the analysis, the database has now been extended to store and compress curve data and to apply the outlier detection to the history data as well. Furthermore, where a deeper analysis of a curve anomaly is needed, it is shown how structural part deformation behaviour can be correlated against the curve information, yielding not only curve-to-curve relationships but also part-to-curve correlations. Engineers can thus derive design suggestions that lead to improved curve behaviour. In addition, the search for deformation patterns has been extended to search for similarities among time history data, so that simulations with similar behaviour can be identified.
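A minimal sketch of the underlying idea follows, using scikit-learn's PCA on a toy matrix of flattened deformation results; the reconstruction-error threshold, the data and the 3-sigma rule are illustrative assumptions and do not reflect the actual database format or model-mapping technology described above.

```python
# Sketch: flag a new simulation run as "unknown behaviour" when its PCA
# reconstruction error is large relative to the runs already in the database.
# Toy data only; a real system would first map results between different meshes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
previous_runs = rng.normal(size=(200, 3000))          # 200 runs, flattened nodal results
new_run = rng.normal(size=(1, 3000)) + 2.5            # shifted -> behaves differently

pca = PCA(n_components=10).fit(previous_runs)

def reconstruction_error(x: np.ndarray) -> np.ndarray:
    recon = pca.inverse_transform(pca.transform(x))
    return np.linalg.norm(x - recon, axis=1)

baseline = reconstruction_error(previous_runs)
threshold = baseline.mean() + 3.0 * baseline.std()     # simple 3-sigma rule (assumed)
error = reconstruction_error(new_run)[0]
print("new run error:", round(error, 1), "threshold:", round(threshold, 1))
print("outlier:", bool(error > threshold))
```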
Authored & Presented By Ross Blair (Blow Moulding Technologies)
AbstractIntegrating the physical and digital worlds is key to driving sustainable packaging solutions. Using a structured ‘Measure, Digitise, Execute’ approach, BMT combines physical testing with digital prototyping to streamline the development of lightweight, high-performance packaging through stretch blow moulding (SBM), advancing sustainability while maintaining functionality and manufacturability.The process begins with Measure, where comprehensive data is acquired on the material, process, and product performance. Material data is collected using biaxial tensile testing and freeblow testing, enhanced by digital image correlation, to precisely capture the material’s behaviour under SBM conditions. Process data is gathered via in-line and off-line temperature and pressure measurement systems, while product performance data is obtained using industry-standard tests such as top load and burst tests, facilitated by BMT’s laboratory equipment. This multi-dimensional data forms the basis for model calibration and validation, ensuring high-fidelity simulations.In the Digitise phase, the collected data is used to calibrate representative material models for physics-based finite element simulations, integrating temperature and process data to ensure realistic performance predictions. Virtual prototyping is then carried out to explore opportunities for lightweighting, design innovation and sustainable material substitution. By combining simulation with design of experiments (DOE), data is generated across the design space, which is then used to train ML surrogate models. These models continuously learn and improve, enabling efficient exploration of the design space. Once well-trained, these surrogate models are employed to identify Pareto fronts, providing insights into trade-offs between key objectives such as lightweighting, structural integrity, and user experience.Finally, in the Execute phase, optimised designs are validated through physical prototyping and testing. BMT uses 3D-printed moulds to rapidly produce prototypes in-house, significantly reducing the time and cost associated with traditional mould production. This enables quick iteration and validation, with the entire process—from design ideation to physical validation—often completed within a week. This approach ensures that designs meet performance targets while minimising material usage and waste.BMT’s Measure, Digitise, Execute methodology provides a robust framework for accelerating the development of sustainable packaging solutions. By bridging the gap between the digital and physical worlds, BMT effectively addresses complex design challenges in the packaging industry, advancing sustainability while ensuring product quality and functionality.
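As an illustration of the surrogate-and-Pareto step described above, the sketch below trains simple regression surrogates on toy DOE data and extracts a Pareto front for two competing objectives (mass versus top-load strength); the objective functions, sample sizes and model choice are assumptions for illustration, not BMT's actual models or design variables.

```python
# Sketch: train surrogates on DOE samples, evaluate a dense design grid,
# and extract the Pareto front between bottle mass and top-load strength.
# The "fe_*" functions below are invented placeholders for FE/test results.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
doe = rng.uniform([0.2, 80.0], [0.5, 120.0], size=(60, 2))   # wall thickness [mm], preform temp [C]

def fe_mass(x):       # placeholder for simulated bottle mass [g]
    return 20.0 + 60.0 * x[:, 0]

def fe_topload(x):    # placeholder for simulated top-load strength [N]
    return 150.0 * x[:, 0] + 0.4 * (x[:, 1] - 100.0) ** 2 + rng.normal(0, 1.0, len(x))

mass_model = RandomForestRegressor(random_state=0).fit(doe, fe_mass(doe))
load_model = RandomForestRegressor(random_state=0).fit(doe, fe_topload(doe))

grid = rng.uniform([0.2, 80.0], [0.5, 120.0], size=(5000, 2))
objs = np.column_stack([mass_model.predict(grid), -load_model.predict(grid)])  # minimise both

def pareto_front(points: np.ndarray) -> np.ndarray:
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if keep[i]:
            dominated = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
            keep[dominated] = False
    return keep

front = grid[pareto_front(objs)]
print(f"{front.shape[0]} Pareto-optimal designs out of {len(grid)} candidates")
```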
BiographyRoss Blair is the Head of Simulation and Modelling at BMT, where he leads a team focused on advancing packaging design through simulation and data-driven approaches. He specialises in finite element analysis, machine learning, and computer programming. Ross holds a PhD in simulation and optimisation for the design of bioresorbable polymeric stents and a Masters in Mechanical Engineering from Queen's University Belfast. At BMT, he is instrumental in driving digital transformation and integrating AI to enhance packaging performance while prioritising sustainability and innovation.
Presented By Riccardo Testi (Piaggio & C. S.p.A.)
Authored By Riccardo Testi (Piaggio & C. S.p.A.), Michele Caggiano (Piaggio & C. SpA), Antonio Fricasse (Piaggio & C. SpA)
AbstractA CAE workflow was defined within a new Piaggio development methodology and executed to develop and assess the performance of the new Piaggio 48V electric powertrain. The workflow involved linked CFD, EMAG and structural analyses. The objective was to anticipate fundamental results and information and to reduce the economic effort associated with physical prototyping activities. Several CAE suites were coupled and optimized using Piaggio procedures consolidated over the years for the development of 2-wheelers equipped with internal combustion engines. The modular structure of those suites made it easier to incorporate the new EMAG analyses into the workflow. The MBS system simulation activities were carried out by integrating the new e-powertrain models into a Piaggio database that includes libraries of subsystems such as transmissions and test benches. This approach will allow quicker generation of future models by leveraging the carry-over made possible by the modular nature of such a database. The whole CAE workflow relied on a common source of truth residing in Piaggio's PLM system, allowing smooth cooperation between Piaggio's E-Mobility and Powertrain departments. EMAG simulations were carried out to assess the electric machine's performance and to provide input data for the subsequent CFD, MBS and structural analyses. The dynamic behavior, from a mechanical standpoint, was analyzed with multibody models, which produced KPI values and provided input data for the stress analyses. CFD analyses were used to verify that the operating temperatures were compatible with the electric machine's requirements and provided thermal maps for the FEM stress analyses. The structural integrity of the whole e-powertrain system was verified with combined stress and durability analyses, based on the working conditions identified during the previous EMAG, MBS and CFD campaigns. The structural FEM analyses were also used without coupling them with durability tools, to investigate functional aspects of the mechanical system. Because the CAE campaign was carried out in the project's early stages, it reduced the amount of physical testing and supported the sourcing activities managed by the Purchasing department.
Biography1992-1994: Fiat Research Center – CAE analyst 1994-1996: Fiat Auto – CAE analyst, standards drafter 1996-2004: Piaggio & C. Spa – R&D Department – CAE analyst 2004-2014: Piaggio & C. Spa – Engine Design Department – CAE analyst 2014-2016: Piaggio & C. Spa – Vehicle Integration Department – CAE manager 2016-present: Piaggio & C. Spa – Engine Design Department – CAE analyst
Presented By Romain Barbedienne (IRT-SystemX)
Authored By Romain Barbedienne (IRT-SystemX), Julien Silande (ESI-Group), Anthony Levillain (OP Mobility), Cedric Leclerc (Renault Group), Maxime Hayet (Stellantis), Henri Sohier (IRT-SystemX)
AbstractLimiting carbon footprints is a global issue with a huge impact on companies, particularly in the European automotive sector, where the sale of new combustion engine vehicles will be banned from 2035. These constraints require industries to integrate new technologies into products as quickly as possible, which means shortening the product development cycle. One way to reduce development cycles is to offload development to suppliers, so customer-supplier relations will become increasingly important. This paper describes a tool-based process to help OEMs and suppliers exchange simulation models. One of the challenges of such a framework is that it must be able to adapt to companies with different vocabularies and operating modes. First, the difficulties were clarified in a workshop with experts from various OEM and supplier companies. This workshop showed that the main problems linked to model exchange between OEMs and suppliers arise in the specification and credibility characterization phases of a simulation model. Existing solutions to these difficulties include the requirements list defined by NASA standard 7009B, the Predictive Capability Maturity Model (PCMM) and the Costa method. However, none of these methods is adapted to the criteria that matter most to industry: the design phase, the expected maturity level of the model, and the expertise of the stakeholders. This paper presents two solutions to address these issues, co-constructed with several simulation experts from companies that are either OEMs or suppliers. The first solution is a set of metadata (MIC core) to help with specification, together with a checklist of 24 requirements. Each requirement belongs to one of five subsections: the clarity of the specification, the scope of the modeled system, the simulation environment, the model description, and the verification and validation procedure and criteria. These requirements are filtered according to the three criteria defined above and according to the MIC fields that are filled in. The second solution is a credibility assessment questionnaire composed of 21 questions. The responses have been designed to be as interpretable as possible, with, for example, concrete thresholds to be reached or specific actions to be taken. These questions are used to calculate a score for six distinct categories: model robustness and sensitivity, model uncertainty and margin, expert verification, expert qualitative validation, experimental validation, and model use. The main feature of this questionnaire is that some questions are completed by the supplier and others by the OEM, to ensure that the model is used in accordance with the specification. The score is compared with a threshold that depends on the three criteria mentioned above. To assess the feasibility of the approach, a demonstrator was created to support it. This article presents the application of the framework to a use case in which a fuel cell model is integrated into a thermal management model for an electric vehicle. The proposed solutions address the issue of OEM-supplier interaction by improving both the specification process and the credibility assessment of simulation models. Future work will focus on the use of credibility assessment in simulation architectures composed of different models.
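A minimal sketch of how such a questionnaire-based credibility score might be aggregated and compared against context-dependent thresholds is shown below; the category names follow the abstract, but the answer scale, weights and threshold values are invented for illustration and are not the MIC core or questionnaire defined by the authors.

```python
# Sketch: aggregate questionnaire answers (assumed 0-4 scale) into per-category
# credibility scores and compare them with thresholds that depend on context.
# Category names follow the abstract; all numbers are illustrative only.
CATEGORIES = [
    "robustness_sensitivity", "uncertainty_margin", "expert_verification",
    "expert_qualitative_validation", "experimental_validation", "model_use",
]

# Answers keyed by (category, question id); some filled by the supplier, some by the OEM.
answers = {
    ("robustness_sensitivity", 1): 3, ("robustness_sensitivity", 2): 4,
    ("uncertainty_margin", 1): 2, ("expert_verification", 1): 4,
    ("expert_qualitative_validation", 1): 3, ("experimental_validation", 1): 1,
    ("model_use", 1): 4,
}

def category_scores(answers: dict) -> dict:
    scores = {}
    for cat in CATEGORIES:
        vals = [v for (c, _), v in answers.items() if c == cat]
        scores[cat] = sum(vals) / (4.0 * len(vals)) if vals else 0.0  # normalised to [0, 1]
    return scores

# Illustrative thresholds for a "late design phase, high expected maturity" context.
thresholds = {cat: 0.6 for cat in CATEGORIES}
thresholds["experimental_validation"] = 0.75

scores = category_scores(answers)
for cat in CATEGORIES:
    verdict = "OK" if scores[cat] >= thresholds[cat] else "needs action"
    print(f"{cat:32s} {scores[cat]:.2f} (threshold {thresholds[cat]:.2f}) -> {verdict}")
```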
Presented By Siddhartha Gautham A V (Siemens Digital Industries Software)
Authored By Siddhartha Gautham A V (Siemens Digital Industries Software), Benjamin Ganis (Siemens Digital Industries Software)
AbstractDue to the significant benefits realized in time and cost to solution, the use of General-Purpose Graphics Processing Units (GPGPUs) over traditional Central Processing Units (CPUs) is becoming ever more widespread in the industrial Computational Fluid Dynamics (CFD) world. Because the processing architecture of a GPGPU is fundamentally different from that of a CPU, there is a corresponding need to optimize and accelerate solver performance specifically on GPGPUs. While certain processes become less expensive as a result of GPU acceleration, other processes that were relatively quick on the CPU may become dominant portions of the overall run time on the GPU. For unsteady CFD problems involving rigid body motion, the cost of boundary interface computations and the maintenance of a dynamic framework for sliding meshes can often become one such performance bottleneck on GPGPUs. In this paper, Boundary Interface Caching (BIC) is presented as a method to accelerate overall solver performance by significantly cutting down these overhead processing costs for sliding mesh / moving mesh CFD simulations. The fundamental methodology of boundary interface caching is explained first, after which the Simcenter STAR-CCM+ solver is used to demonstrate the benefits of boundary caching on two industrial use cases: (1) an external aerodynamics simulation of a sports car, and (2) an acoustic simulation of an HVAC (heating, ventilation, air-conditioning) fan. Both simulations are first run without boundary interface caching to establish a baseline. They are subsequently run with boundary interface caching, on both CPU and GPU architectures. The resulting convergence behavior, solution accuracy and solver performance are presented, compared and contrasted in this paper. For STAR-CCM+ simulations with rigid body motion, it is demonstrated that our proprietary boundary caching algorithm provides significant improvements in overall solver time on GPGPUs, whilst maintaining similar levels of accuracy to traditional CPU-based simulations.
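The BIC algorithm itself is proprietary, but the general idea of reusing recurring interface computations for a periodically repeating sliding-mesh position can be illustrated as follows; the quantized-angle key, the angular pitch and the placeholder intersection routine are purely illustrative assumptions and not the method described in the paper.

```python
# Illustration only: memoise sliding-interface connectivity/weights keyed by the
# rotor position quantised to the mesh's angular pitch, so repeated positions
# reuse cached results instead of recomputing the interface intersection.
import math

class InterfaceCache:
    def __init__(self, pitch_deg: float, tol_deg: float = 1e-6):
        self.pitch = pitch_deg
        self.tol = tol_deg
        self._cache = {}
        self.recomputed = 0

    def _key(self, angle_deg: float) -> int:
        # Positions that differ by a whole number of pitches map to the same key.
        return round((angle_deg % self.pitch) / self.tol)

    def weights(self, angle_deg: float):
        key = self._key(angle_deg)
        if key not in self._cache:
            self._cache[key] = self._intersect(angle_deg % self.pitch)
            self.recomputed += 1
        return self._cache[key]

    def _intersect(self, offset_deg: float):
        # Placeholder for the expensive face-intersection / weight computation.
        return {"offset": offset_deg, "weight": math.cos(math.radians(offset_deg))}

cache = InterfaceCache(pitch_deg=10.0)
for step in range(100):
    cache.weights(angle_deg=step * 2.5)   # 2.5 deg of rotation per time step
print("interface computations performed:", cache.recomputed, "of", 100)
```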
17:40
Authored & Presented By Fabio Santandrea (Volvo Car Corporation)
AbstractEnsuring compliance with regulatory requirements is a mandatory process for many products to be allowed on the market. The assessment of product performance is largely based on physical testing of a few samples and, possibly, monitoring of the production process. In order to reduce the cost and time-to-market associated with the certification process, manufacturing companies have increased their efforts to establish numerical simulations as a legitimate alternative to physical testing, thus introducing the notion of “Certification by Analysis” (CbA). In some sectors, certification bodies responded to the industrial drive towards virtual testing by developing guidelines and standardised reporting documents to streamline the credibility assessment of the results of numerical simulations without compromising the safety of the certification decision. However, there are still significant differences among industrial sectors in the acceptance of CbA and in the maturity of its practical implementation. In this contribution, a review of existing examples of CbA is presented, together with the preliminary study of a potential new case. The role of standards in the specification of product requirements and assessment methods (for physical as well as virtual testing) will be considered, drawing on the work done in the research project STEERING funded by the Swedish Innovation Agency (VINNOVA). The review will focus on the identification of similarities and differences in requirements, methodologies, and challenges faced by manufacturers and certification bodies. The analysis of established cases provides the starting point to investigate the role of CbA in applications where product certification currently relies fully on physical testing. The feasibility of CbA will be studied in the assessment of crashworthiness requirements for a component made of fibre-reinforced polymer composite material. This preliminary study is developed within the COST Action HISTRATE, a European network of academic researchers and industrial stakeholders that aims at establishing the scientific foundation of a reliable framework for CbA of composite structures subjected to high-strain loads.
Presented By Subham Sett (Hexagon)
Authored By Subham Sett (Hexagon), Curtis (Brody) Kendall (Northwestern University)
AbstractFinite element analysis (FEA) has been a pillar of computer-aided engineering (CAE) since the 1960s. The longevity of this simulation technology presents both opportunities and unique challenges for leveraging generative AI for new and experienced users alike. While large language model (LLM) based assistants for such users are mushrooming, many of them operate at a superficial level based on training documents and manuals; they are not sufficient for supporting a user intent on understanding, debugging and quickly resolving issues that lie at the input file level, the gateway to the FEA solver. In this context, there are three unique challenges to leveraging generative AI. First, the inputs to FEA solvers are text-based, with syntaxes, definitions and descriptions exposed through keywords that follow a non-intuitive, unique taxonomy and can be documented in manuals running to thousands of pages. Next, these input file formats never adapted beyond the original implementation intended for punch cards, which allows input to be unsorted but creates the burden of ID management. This approach is in direct conflict with modern input decks that are geared towards HPC and can easily be gigabytes in file size. The sorting of such a file could be specific to a pre-processor, to company best practices, or simply to the historical build-up of a model. Finally, the unsorted nature of these files directly contrasts with the natural flow of human language. In this work, we present a unique approach to solving the problem by building a graph representation of the input file, establishing edges based on cross-referencing ID schemes, and traversing this graph. This approach enables input-file-specific document retrieval and gives users of every skill level the ability to obtain prompts and responses at both a higher level (pure documentation) and a deeper level (input file). Ongoing work explores optimizing the developed algorithms to reduce token consumption and help users leverage their compute resources more cost-effectively. In this proof-of-concept work, we combine a parser, a graph, a graph traversal method, and documentation retrieval (based on the graph) to deliver an in-context Generative AI experience relative to the exact features of the input file a user is interrogating. In ongoing trials, we expect reduced onboarding times, reduced debugging times, and fewer touches of traditional documentation.
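A minimal sketch of the graph idea on a toy, generic keyword deck (not any particular solver's real format) follows: cards become nodes and ID cross-references become edges, so retrieval can pull in exactly the cards, and the documentation for those keywords, relevant to the entity a user asks about.

```python
# Sketch: represent a toy keyword-style input deck as a graph whose edges follow
# ID cross-references, then traverse it to collect the context for a user query.
# The card names, fields and IDs here are invented, not a real solver format.
import networkx as nx

deck = [
    ("ELEMENT_SHELL", {"eid": 101, "pid": 7}),
    ("PART",          {"pid": 7, "mid": 3, "secid": 12}),
    ("MAT_ELASTIC",   {"mid": 3, "E": 210e3, "nu": 0.3}),
    ("SECTION_SHELL", {"secid": 12, "thickness": 1.5}),
]

g = nx.DiGraph()
for keyword, fields in deck:
    node = f"{keyword}:{fields[list(fields)[0]]}"          # e.g. "PART:7"
    g.add_node(node, keyword=keyword, **fields)

# Edges: a card that *uses* an ID points to the card that *defines* it.
defines = {("PART", "pid"), ("MAT_ELASTIC", "mid"), ("SECTION_SHELL", "secid")}
for src in g.nodes:
    for field, value in g.nodes[src].items():
        for dst in g.nodes:
            kw = g.nodes[dst]["keyword"]
            if dst != src and (kw, field) in defines and g.nodes[dst].get(field) == value:
                g.add_edge(src, dst, via=field)

# "Why does element 101 behave this way?" -> gather everything it references.
context = nx.descendants(g, "ELEMENT_SHELL:101") | {"ELEMENT_SHELL:101"}
print(sorted(context))   # cards to feed, with their documentation, into the LLM prompt
```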
Presented By Jörg Straub (Institute for Material Systems Technology Thurgau - WITG)
Authored By Jörg Straub (Institute for Material Systems Technology Thurgau - WITG), Lazar Boskovic (Institute for Material Systems Technology Thurgau - WITG), Alex Eckhardt (Institute for Material Systems Technology Thurgau - WITG), Urs Dornbierer (Geobrugg AG), Pascal Bernhard (Geobrugg AG)
AbstractAs fish farming becomes increasingly important worldwide, ongoing investigations aim to numerically calculate and experimentally analyse highly stressed wire connections, such as those found in offshore fish farming cages, to provide a basis for more accurate lifetime estimation. The challenge is to recognise possible wire breaks that can occur under different environmental conditions during the product life cycle of a cage and to derive suggestions that increase the system's service life. This is done on the one hand for financial reasons and on the other to prevent the mixing of farmed and wild fish through escapes. To better understand the behaviour of the high-strength stainless steel wires from which the cages are made, structural calculation models are being set up, which are ultimately compared with the experimental behaviour of the structures in the laboratory and in operation. Series of quasi-static tensile tests performed on both straight wire specimens and the smallest unit of a wire mesh (referred to as a One-link) provide data for the design and validation of a finite element model in the context of a non-linear structural analysis. At the same time, to optimize calculation efficiency, a way is sought to dispense with 3D modelling of the entire fish farm. Here, a spring-stiffness substitute model is used, with which it is possible to represent the wire mesh periphery around one or more interconnected One-links (3x5 or more). Clamping devices specially developed for the experiments allow wire connections (a One-link and a mesh segment of 3x5 connected One-links) to be stressed in a defined way, generating consistent, comparable data which is then compared with the simulation results. These results and findings are used to adapt the calculation models and form the basis for further simulations. In addition to the static strength investigations and the assessment of other effects that may occur during operation, fatigue tests (up to 2 million load cycles) provide new insights, allowing the prediction of potential fatigue damage due to operational loads such as waves or water flow. With the results of fluid-structure interaction calculations, the loads on a fish farm cage can be determined as a function of the flow conditions, and conclusions can ultimately be drawn about the site-specific service life. All tests and simulations are conducted without considering the corrosive effects of the water in which fish farming is carried out. Previous studies on stainless steels and their behaviour in corrosive environments are linked to this investigation to make lifetime predictions even more accurate.
Presented By Randy Bailey (DJH Engineering Center)
Authored By Randy Bailey (DJH Engineering Center), Ivan Ihlar (DJH Engineering Center)
AbstractIn finite element analysis and simulation, correlating test data (gage data, acceleration data, etc.) with the load cases used in simulations is crucial for developing reliable models that accurately represent real-world conditions. This process involves examining the data, the model calibration techniques, and the reliability of both the data and the analysis model. By converting strain gage data into meaningful load cases, engineers can better simulate real-world conditions, leading to improved predictive models and design validation. Processing statistical data and applying it to load case development is the first step toward an accurate model that matches real life. In this process, test data is applied to real-world scenarios using various data manipulation techniques, simulation tools, and validation processes. These tools can be used in tandem to determine the load cases and operations that are most damaging. Understanding and calculating these damaging operations is a critical aspect of engineering analysis. Finding where the damage is generated during operation means identifying and analysing the load cases that lead to material fatigue, structural failure, and other forms of damage. By leveraging strain gage data to calculate where the damage is coming from, engineers can develop more accurate models and focus on the largest contributors, rather than simply running all the load cases, to predict and mitigate these damaging effects and improve the durability and safety of designs. This data can also be applied to real-world problems so that updates focus on the source of an issue rather than trying to fix a symptom. Integrating gage data and test data into engineering simulations is essential to bridge theoretical models with real-world applications. This presentation will demonstrate the benefits of this process and the resulting improvements in design performance using several case studies. These real-world applications demonstrate the practical benefits of accurate test correlation and load case development and provide insight into best practices and approaches for applying strain gage data to projects. Using these methods, the accuracy of design iterations is improved, accelerating the design process and reducing the need for additional testing. With a comprehensive overview of test correlation, load case development, the identification of damaging operations, and the application of this data to real-world scenarios, analysis moves beyond analysis alone and begins to simulate real-world solutions. By bridging the gap between theoretical analysis and practical application, engineers can enhance the reliability and performance of their designs, leading to safer and more efficient solutions.
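A minimal sketch of how strain-gage histories might be ranked by relative fatigue damage, so that effort can focus on the largest contributors, is given below; the simplified reversal counting, the modulus, the S-N exponent and the toy signals are assumptions for illustration, not the presenters' actual process (a production workflow would use full rainflow counting per ASTM E1049).

```python
# Sketch: rank recorded operations by relative fatigue damage from strain-gage data.
# Simplified reversal-range counting stands in for full rainflow counting; E and
# the S-N slope exponent m are illustrative values only.
import numpy as np

E = 210e3        # MPa, steel elastic modulus (assumed)
M_EXP = 3.0      # S-N slope exponent (assumed)

def turning_points(signal: np.ndarray) -> np.ndarray:
    d = np.diff(signal)
    keep = np.concatenate(([True], d[1:] * d[:-1] < 0, [True]))
    return signal[keep]

def relative_damage(strain_microstrain: np.ndarray) -> float:
    stress = E * strain_microstrain * 1e-6          # MPa
    tp = turning_points(stress)
    ranges = np.abs(np.diff(tp))                    # half-cycle ranges (simplified)
    return float(np.sum(0.5 * ranges ** M_EXP))     # damage ~ sum(range^m), unnormalised

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)
operations = {
    "rough_road":   800 * np.sin(12 * t) + 200 * rng.normal(size=t.size),
    "highway":      150 * np.sin(3 * t) + 30 * rng.normal(size=t.size),
    "loading_dock": 400 * np.abs(np.sin(1.5 * t)),
}

scores = {name: relative_damage(sig) for name, sig in operations.items()}
total = sum(scores.values())
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:13s} {100 * s / total:5.1f}% of relative damage")
```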
Presented By Niranjan Ballal (Fraunhofer High-Speed Dynamics, EMI)
Authored By Niranjan Ballal (Fraunhofer High-Speed Dynamics, EMI), Thomas Soot (Fraunhofer-Institut für Kurzzeitdynamik), Michael Dlugosch (Fraunhofer-Institut für Kurzzeitdynamik)
AbstractMachine learning (ML) is emerging as a key tool for predicting crash dynamics in near real-time, driving significant advancements in automotive safety engineering. In this domain, high-fidelity synthetic data, such as finite element (FE) crash simulation data, plays a pivotal role. Real-world crash data often presents challenges such as limited availability, noise, and incomplete information, making it difficult to train robust ML models. By contrast, simulation data provides detailed insights into the underlying physical phenomena, offering a controlled environment to generate diverse datasets. This makes simulation data an indispensable resource for the development of predictive models that aim to improve vehicle safety and occupant protection. Traditional ML approaches in this field often rely on scenario-level input parameters, such as impact velocity and collision angle, to predict outcomes like intrusion or injury levels. While these approaches are effective to a degree, they frequently fall short of leveraging the rich, granular information embedded within simulation data. This limitation can result in suboptimal predictive accuracy, particularly when dealing with complex crash dynamics involving multiple interacting factors. A domain knowledge-guided methodology is introduced to address this limitation, segmenting the problem domain into smaller, homogeneous sub-domains based on specific physical phenomena. By reducing the required ML model complexity within each sub-domain, tailored ML models are developed to optimize predictions for specific crash dynamics, enhancing both data efficiency and prediction model accuracy. This approach leverages hierarchical model trees informed by domain expertise, ensuring that each sub-domain's unique traits are effectively captured without relying on a singular, overly complex model. Embedding domain knowledge through segmentation not only allows for the customization of sub-model architectures but also facilitates adaptive data generation. The proposed methodology improves predictive performance and reduces training sample requirements while preserving fidelity. Comparative evaluations demonstrate superior accuracy and robustness in capturing nuanced relationships within crash simulation data, positioning this approach as a significant step forward in the application of ML to safety-critical domains.
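A minimal sketch of the segmentation idea follows: each crash scenario is routed to a sub-domain defined by domain knowledge (here, an invented split on impact angle) and a separate, simpler regressor is trained per sub-domain; the split rule, features and synthetic data are illustrative assumptions, not the authors' hierarchical model tree.

```python
# Sketch: domain-knowledge-guided segmentation. Scenarios are split into
# sub-domains (here by impact angle) and a small regressor is trained per
# sub-domain instead of one complex global model. Data and split are toy values.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = np.column_stack([rng.uniform(20, 80, 600),      # impact velocity [km/h]
                     rng.uniform(0, 45, 600)])      # impact angle [deg]
# Invented ground truth: oblique impacts follow a different intrusion law.
y = np.where(X[:, 1] < 15,
             2.0 * X[:, 0],                         # "frontal" sub-domain
             1.2 * X[:, 0] + 4.0 * X[:, 1])         # "oblique" sub-domain
y = y + rng.normal(0, 5, 600)

def subdomain(x_row):                               # domain-knowledge split rule (assumed)
    return "frontal" if x_row[1] < 15 else "oblique"

labels = np.array([subdomain(r) for r in X])
models = {}
for name in ("frontal", "oblique"):
    mask = labels == name
    models[name] = GradientBoostingRegressor(random_state=0).fit(X[mask], y[mask])

def predict(x_row):
    return models[subdomain(x_row)].predict(np.asarray(x_row).reshape(1, -1))[0]

print("predicted intrusion proxy:", round(predict([60.0, 30.0]), 1))
```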
Authored & Presented By Christoph Angermann (Scherdel Siment)
AbstractThe fatigue strength of spring-hard components is significantly influenced by the manufacturing process. According to the FKM guideline for springs, it is possible to evaluate the fatigue strength of such components, whereby manufacturing process parameters such as shot peening, heat treatment and the residual stresses induced by the manufacturing process play a crucial role. Residual stresses that are induced during the manufacturing process have a direct effect on the stress distribution and thus on the fatigue life of the components. However, the exact determination of these residual stresses is associated with considerable challenges, as detailed and time-consuming production simulations are usually required for this. In order to reduce this effort, an innovative approach was developed that efficiently and precisely determines the process-induced residual stresses. This method is based on performing a large number of non-linear calculations in order to generate a broad database of residual stresses in round wires after the coiling process. Based on this data, a convolutional neural network was trained that can precisely predict the stress profiles in the wire cross-section after coiling. The input parameters used in the computations include the spring geometry and the material class of the spring-hard materials. The output of this surrogate model is the stress state in the wire cross-section, and it thus provides a reliable basis for taking the residual stresses into account in the fatigue prediction. A significant benefit of this method is the possibility of combining this process step with other manufacturing processes, such as shot peening and heat treatment, and also taking their influence on the residual stress distribution into account. This creates a complete view of the process-induced residual stresses along the entire production chain. The method presented allows a more efficient and comprehensive consideration of residual stresses when determining the fatigue strength of spring-hard components. This not only improves the accuracy of fatigue strength evaluations, but also enables the manufacturing processes to be optimized with regard to the fatigue strength and durability of the components. This approach therefore represents a significant advance in the field of fatigue strength analysis and optimization.
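A minimal sketch of a surrogate that maps a handful of spring parameters to a residual-stress profile over the wire cross-section is shown below; the architecture, input parameterization and profile resolution are assumptions for illustration and do not reproduce the trained network described in the abstract.

```python
# Sketch: a small decoder-style network mapping (wire diameter, coil diameter,
# material class index) to a residual-stress profile sampled at 32 points across
# the wire cross-section. Architecture and sizes are illustrative only.
import torch
import torch.nn as nn

class StressProfileNet(nn.Module):
    def __init__(self, n_inputs: int = 3):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_inputs, 64), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose1d(8, 8, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose1d(8, 8, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.Conv1d(8, 1, kernel_size=3, padding=1),                     # stress profile
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        z = self.encode(params).view(-1, 8, 8)     # latent of 8 channels x 8 points
        return self.decode(z).squeeze(1)           # (batch, 32) residual stress values

model = StressProfileNet()
params = torch.tensor([[4.0, 40.0, 1.0]])          # d_wire [mm], D_coil [mm], material class
profile = model(params)                            # untrained forward pass, shape (1, 32)
print(profile.shape)

# Training would minimise MSE against residual-stress profiles taken from the
# non-linear coiling simulations, e.g.:
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```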
Presented By Svetlana Jeronimo (Dassault Systèmes Deutschland GmbH)
Authored By Svetlana Jeronimo (Dassault Systèmes Deutschland GmbH), Faron Hesse (Dassault Systemes)
AbstractAlthough just one component of a vehicle that has thousands of parts, headlights play a crucial role in driver and passenger safety. Specifically, both rear and front headlights illuminate a vehicle, whether by day or by night, thereby allowing other road vehicles to perceive the vehicle in question. In summertime, the main source of headlight ray obstruction is bugs or insects that accumulate on the headlight surface, particularly when travelling at high speeds. During the wintertime, however, the main cause of light ray obstruction occurs prior to travelling, when the vehicle is still warming up. In this phase, ice that has accrued on the headlight's cover while the vehicle was at rest often needs to be melted away via heat emitted from a wire filament embedded in the Makrolon polycarbonate cover of the headlight. When the driver and/or passengers are in a hurry, it is imperative that the defrosting of the headlight ice layer occurs as rapidly as possible. With the increasingly powerful hardware and software capabilities on the market, it is now possible to optimize the headlight defrost scenario, in which the hot wire filament melts away the ice layer on the headlight's front, for reduced defrost time entirely virtually. The present work uses modelling and simulation (MODSIM) on Dassault Systèmes' unifying computer-aided engineering (CAE) software platform, where both computer-aided design (CAD) and Navier-Stokes based computational fluid dynamics (CFD) tools are combined, to optimize this defrost time. A key aspect of the current workflow is the H2O phase change, which is modelled by implementing a temperature-dependent specific heat capacity with a spike at 0°C to account for H2O's heat of fusion. Furthermore, the MODSIM process allows the simulation scenario set-up of a parameterized CAD model to be automatically updated when the geometry in question is changed. For the headlight geometry in question, which is provided by a Tier I automotive component supplier called Weldex, the headlight filament wiring is parametrized and its shape optimized for minimized defrost time using a parametric design study approach. The constraint is that the headlight geometry itself cannot be changed, as this is dictated by the automotive original equipment manufacturer (OEM) that has purchased the headlight.
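The phase-change treatment described above can be illustrated with an apparent-heat-capacity function: a narrow peak added to cp around 0°C whose integral equals the latent heat of fusion; the peak width and property values below are generic assumptions, not the values used in the Weldex/Dassault Systèmes model.

```python
# Sketch: temperature-dependent apparent specific heat capacity for ice/water,
# with a Gaussian peak at 0 degC whose area equals the latent heat of fusion, so
# the enthalpy of melting is recovered by integrating cp(T). Values are generic.
import numpy as np

CP_ICE = 2100.0       # J/(kg K)
CP_WATER = 4186.0     # J/(kg K)
L_FUSION = 334e3      # J/kg
DT = 0.5              # half-width of the smeared melting interval [K] (assumed)

def apparent_cp(T_celsius: np.ndarray) -> np.ndarray:
    base = np.where(T_celsius < 0.0, CP_ICE, CP_WATER)
    peak = (L_FUSION / (DT * np.sqrt(2 * np.pi))) * np.exp(-0.5 * (T_celsius / DT) ** 2)
    return base + peak

# Check: integrating cp over -10..+10 degC recovers sensible + latent heat.
T = np.linspace(-10, 10, 20001)
cp = apparent_cp(T)
enthalpy = np.sum(0.5 * (cp[1:] + cp[:-1]) * np.diff(T))   # trapezoidal integration
expected = 10 * CP_ICE + 10 * CP_WATER + L_FUSION
print(f"integrated enthalpy change: {enthalpy/1e3:.0f} kJ/kg (~{expected/1e3:.0f} kJ/kg expected)")
```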
BiographySvetlana Jeronimo holds an MSc in Computational Engineering from Ruhr University Bochum, where she specialized in computational fluid dynamics, finite element analysis, and material modeling. She began her professional career in 2018 at Exa Corporation, working on automation of aerodynamic and thermal applications in the automotive industry. In 2020, Svetlana joined Dassault Systemes as an Industry Process Consultant in SIMULIA Fluids division. Since 2023, she has been leading a SIMULIA Fluids team, supporting customers and developing processes in aerodynamics, aeroacoustics, and thermal management.
Presented By Jinjiang Li (University of Manchester)
Authored By Jinjiang Li (University of Manchester), Robin Laurence (University of Manchester), Oliver Woolland (University of Manchester), Zeyuan Miao (University of Manchester), Matthew Roy (University of Manchester), Lee Margetts (University of Manchester)
AbstractIn this paper, the authors describe the design, implementation and testing of a prototype digital twin of a manufacturing cell that can be used for wire arc additive manufacture. We have used Nvidia’s Omniverse as the core platform due to its open interface, which facilitates easy integration with open source and proprietary CAD, CAM and CAE tools. The digital twin breaks down the functional silos between design, manufacture, operations and maintenance, providing a full digital thread of the processes and a digital passport for the component. The implementation collects, stores and links all data associated with a manufactured part. Using the manufacture of a fusion power plant component as a case study, interaction with the digital twin starts with the engineer creating the original CAD model. Once complete, the CAD model is processed (by software) to prepare instructions for a robot arm to build the part. The manufacturing cell is equipped with various sensors and cameras. All the instrument data is streamed to the digital twin as the build progresses. To facilitate a loop back from manufacturing to physics-based modelling, a layer-by-layer build geometry is digitised using a laser scanner mounted to a second robot arm. The geometry is passed to a proprietary finite element package for meshing and residual stress analysis. All data captured by the twin is processed and documented, employing automation where possible, reducing the need for human involvement. After installation in the fusion power plant, the complete digital record can be retrieved at any time during the lifetime of the component. If the power plant is instrumented, the operating conditions can also be streamed to the digital twin, informing maintenance schedules. The digital twin links the traditional CAD/CAE design process with the actual cradle to grave experience of the component, allowing engineers to re-evaluate assumptions made at the design concept stage, as well as providing the opportunity for lessons learned, informing design for future generations of power plant.
Presented By Sam Zakrzewski (Rescale)
Authored By Sam Zakrzewski (Rescale), Romain Klein (Rescale)
AbstractThe demand for high-performance computing (HPC) is expanding as scientific and engineering challenges grow increasingly complex. Traditional CPU-based architectures, while versatile, often struggle to efficiently handle specialised workloads such as machine learning, computational fluid dynamics (CFD), and molecular dynamics. To address this, the integration of domain-specific hardware accelerators like NVIDIA GPUs and ARM chips has emerged as a game-changer, enabling unparalleled performance and efficiency for targeted applications.This paper delves into the role of domain-specific hardware accelerators in revolutionising scientific workflows. We focus on how specialised architectures, available on Rescale’s intelligent cloud HPC platform, empower researchers and engineers to leverage cutting-edge hardware tailored to their workloads. By combining NVIDIA GPUs for compute-intensive tasks with ARM-based architectures for energy-efficient operations, users can achieve optimal performance while addressing cost and sustainability goals.A critical aspect of this discussion involves the optimisation of workflows for hybrid and heterogeneous computing environments. Integrating domain-specific accelerators requires not only hardware availability but also seamless software orchestration to manage data flows, scheduling, and execution. The platform addresses these challenges by offering a unified environment where users can dynamically select the most suitable hardware configurations based on workload requirements. This flexibility is particularly impactful for industries ranging from aerospace to pharmaceuticals, where precision and efficiency are paramount.The paper also explores practical use cases, such as leveraging NVIDIA GPUs for AI-driven simulation post-processing or ARM-based chips for low-power, high-throughput scenarios. These examples illustrate how hardware accelerators can drastically reduce time-to-solution and cost, enabling organisations to push the boundaries of innovation. Additionally, we will discuss the importance of workload profiling and benchmarking to ensure optimal hardware utilisation, drawing on real-world insights.Attendees will gain an understanding of the technical and operational considerations involved in adopting domain-specific hardware, from software compatibility to deployment strategies in cloud-based environments. We will highlight how to simplify these complexities, allowing users to focus on their core scientific and engineering objectives.Through this exploration of domain-specific accelerators, we aim to demonstrate how specialised architectures are not just advancing performance but also reshaping how organisations approach HPC, paving the way for new breakthroughs in research and industry.
18:00
Authored & Presented By Oleg Ishchuk (SDC Verifier)
AbstractFloating offshore wind platforms are subjected to complex cyclic loading caused by waves, wind, and currents, creating significant fatigue risks in critical welded joints. These joints are primary stress concentrators and require precise evaluation to ensure structural integrity and compliance with offshore standards such as DNV RP-C203. This study focuses on the fatigue performance of these structures, providing a detailed approach to fatigue life assessment and structural optimization under harsh marine conditions. The analysis begins with environmental load modeling, incorporating parameters such as wave heights, wind speeds, and current forces. These factors are used to simulate the six-degree-of-freedom motion of the platform, including surge, sway, heave, roll, pitch, and yaw, which induce complex multi-axial stress states. Welded joints, representing high-risk failure points, are analyzed using S-N curves and hot-spot stress methodologies to evaluate cumulative fatigue damage over time. Managing the extensive datasets generated by offshore loading scenarios is a critical challenge. This study employs advanced filtering techniques and rainflow cycle counting to identify significant stress cycles while minimizing computational effort. These methods enable the isolation of high-impact stress ranges, focusing analytical resources on areas where fatigue damage is most likely to occur. Based on the results, structural refinements are proposed, including adjustments to weld throat thickness, material properties, and joint configurations. These changes aim to improve fatigue resistance and ensure a service life of 25 years under demanding operational conditions. The findings demonstrate that these refinements not only meet DNV RP-C203 compliance requirements but also extend the lifespan of critical components by addressing localized stress concentrations. This paper provides a robust methodology for fatigue analysis and structural optimization, offering practical insights for improving the reliability and safety of floating offshore wind platforms. The approach highlights the importance of data-driven engineering decisions in the development of durable and efficient renewable energy systems.
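As a pointer to the kind of calculation involved, the sketch below accumulates a Miner damage sum from hot-spot stress ranges using a bilinear S-N curve of the form used in DNV RP-C203; the curve constants, the knee point and the cycle counts are placeholders for illustration, and a real assessment must use the curve class, thickness correction and safety factors from the standard.

```python
# Sketch: Miner damage sum for a welded hot spot from binned stress ranges using a
# bilinear S-N curve, log10(N) = log10(a) - m*log10(dsigma). The constants below are
# placeholders; real values come from the applicable DNV RP-C203 curve class.
import math

LOG_A1, M1 = 12.0, 3.0        # branch for higher stress ranges (assumed)
LOG_A2, M2 = 15.6, 5.0        # branch beyond the knee point (assumed)
N_KNEE = 1.0e7                # cycles at the knee point (assumed)

def cycles_to_failure(stress_range_mpa: float) -> float:
    n1 = 10.0 ** (LOG_A1 - M1 * math.log10(stress_range_mpa))
    if n1 <= N_KNEE:
        return n1              # high stress range: first branch governs
    return 10.0 ** (LOG_A2 - M2 * math.log10(stress_range_mpa))

# Binned long-term hot-spot stress-range distribution over the design life
# (e.g. from rainflow counting of the platform motions); toy numbers only.
bins = [(120.0, 2.0e4), (80.0, 2.5e5), (40.0, 5.0e6), (20.0, 4.0e7)]  # (range MPa, cycles)

damage = sum(n / cycles_to_failure(s) for s, n in bins)
print(f"Miner damage sum over design life: {damage:.2f} "
      f"({'acceptable against' if damage <= 1.0 else 'exceeds'} a D = 1.0 criterion)")
```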
Presented By Alain Tramecon (ESI Group)
Authored By Alain Tramecon (ESI Group), Lars Aschenbrenner (Volkswagen AG)
AbstractReduced Order Modelling (ROM) can be used to improve the accuracy of CAE models while shortening numerical parameter calibration. An industrial airbag deployment case illustrates the value of AI/ROM technology applied to CAE. Today, extensive and time-consuming iterations are needed to calibrate airbag model parameters such as the outflow discharge coefficient and inflator heat loss, which may not be measured precisely by tests. This impacts the validation quality and delivery timing of airbag models for the synthesis car crash simulations. The choice of the relevant airbag model parameter exploration range for validation is based on experience and a trial-and-error approach, and is limited by the computational cost of high-fidelity CFD-coupled Finite Element simulation runs. A ROM-based methodology reduces the airbag validation time by testing thousands of parameter combinations in a time frame of days instead of weeks. Model quality can therefore be improved, as more combinations can be tested using Reduced Order Modelling than with the standard Finite Element approach. The capability of ROM to achieve this target is shown on an industrial airbag calibration study. The available ROM methods using Proper Generalized Decomposition (PGD) are explained, as well as the choice of DOE (Design of Experiments), together with the number of Finite Element simulations required for training the ROM model. The ROM results are then compared to the Finite Element simulations for parameters outside the training set, and a good match is demonstrated. This shows that the parametric ROM model can be used for the calibration study. A series of linear impactor experimental tests was conducted, varying the airbag vent size, impactor mass and velocity. The impactor acceleration, displacement and airbag pressure time history curves obtained by the ROM model are compared to the experimental results for each set of parameters using ISO score (CORA) ratings. The process for finding the best parameter sets among the more than 1000 combinations is fully automated and takes less than one hour. A final validation using a standard Finite Element simulation with the updated parameters is conducted, and the results are compared and rated against each experimental test, including the above-mentioned time history curves and the airbag deployment kinematics.
Authored & Presented By Eunju Park (GNS Systems)
AbstractDesigning sheet metal components is a challenging and specialized task that demands a deep understanding of engineering principles and extensive industrial experience. Traditionally, this process has heavily relied on heuristic knowledge and practical expertise acquired over many years. While this approach has been effective, it is inherently time-consuming and prone to human error, limiting the efficiency and accuracy of the design process. With the development of artificial intelligence (AI), a significant transformation is underway in the design and optimization of sheet metal components. AI methods, renowned for their exceptional performance, are now being integrated into the design process to streamline operations and improve outcomes. These methods aim to simplify the inherently complex design tasks, reduce reliance on manual expertise, and significantly shorten the time required to develop and refine designs. This shift not only enhances the overall efficiency of the design process but also improves accuracy and innovation. Among the most promising AI methods in this context are Artificial Neural Networks (ANNs), particularly the Multi-Layer Perceptron (MLP). The MLP is especially effective in addressing engineering design challenges and minimizing errors in experimental data. It is well suited to optimizing design parameters and making predictions from datasets, tasks that would be too time-consuming with traditional simulation methods alone. The MLP can significantly reduce the time spent on simulations by learning from existing data and providing faster, more accurate predictions. The objective of our research is to develop an integrated methodology that combines forming simulation with an MLP to approximate design parameter functions and evaluate design performance, ultimately enabling the identification of optimal designs. In this methodology, forming simulations are initially employed to generate training data for the MLP. The well-trained MLP is then used to predict the performance of different designs. This methodology not only accelerates the design process but also provides a reliable means of exploring design variations and assessing their effectiveness. To ensure the reliability of the developed MLP, its performance is compared with other machine learning and ANN methods. The results clearly demonstrate that the proposed methodology is highly effective, excelling not only in predicting and evaluating designs but also in estimating various design variations. This integrated approach offers a robust and efficient solution for optimizing sheet metal component design, setting a benchmark for future advancements in the field.
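A minimal sketch of the surrogate step follows, using scikit-learn's MLPRegressor on toy forming data; the design parameters, the response and the network size are assumptions for illustration, not the trained model described in the abstract.

```python
# Sketch: train an MLP surrogate on forming-simulation samples (design parameters ->
# maximum thinning) and use it to score new design candidates. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
# Design parameters: sheet thickness [mm], blank holder force [kN], die radius [mm]
X = rng.uniform([0.8, 50.0, 3.0], [2.0, 300.0, 12.0], size=(300, 3))
# Invented stand-in for the forming-simulation response: maximum thinning [%]
y = 30.0 - 8.0 * X[:, 0] + 0.05 * X[:, 1] - 1.2 * X[:, 2] + rng.normal(0, 0.5, 300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
)
surrogate.fit(X_train, y_train)
print("R^2 on held-out simulations:", round(surrogate.score(X_test, y_test), 3))

candidate = np.array([[1.2, 180.0, 8.0]])
print("predicted max thinning [%]:", round(float(surrogate.predict(candidate)[0]), 2))
```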
Biography- Eunju Park completed her studies with a Bachelor of Science in Industrial & Information Systems Engineering and a Master of Science (M.Sc.) in Data Analytics. - Extensive experience with data-driven solutions and digitalization in solving business problems across various industries. - Awarded the Top Paper Award at the REHVA 14th HVAC World Congress in 2022 for her research on a knowledge graph-based approach to data digitalization and analysis. - Working at GNS Systems since 2024
Presented By Konstantinos Rachoutis (BETA CAE Systems)
Authored By Konstantinos Rachoutis (BETA CAE Systems), Dimitrios Drougkas (BETA CAE Systems)
AbstractIn the lifetime of a vehicle, ease of use and quality of appearance are as important a goal as the longevity of the vehicle: engineers not only need to manufacture a car whose various mechanisms will remain functional without defects after continuous use, but they must also ensure that users will be able to operate it comfortably. This study aims to optimize the design and manufacturing of the tailgate component of a vehicle on two fronts: manufacturing quality and user comfort. The process involves the modification of the gas lifter components' positions in order to perform Multi-Body Dynamic simulations followed by durability analyses. The goal is to maintain the deformations of the tailgate component at reduced levels, resulting in an optimum external appearance regarding panel gaps as well as comfortable user operation. Machine Learning predictive models (also referred to as predictors) are employed in order to accelerate the product design and evaluation process. Engineers can explore various what-if scenarios and extract the necessary key responses for each modification applied to the vehicle, to estimate its improved performance and usability without sacrificing design time. At the same time, Machine Learning predictors are employed in optimization studies, replacing the FE (Finite Element) solver, in order to reach the optimum design in an automated and faster way, thus improving product development time. In this study three optimization approaches are presented, utilizing machine learning methods that predict simulation results for two different analyses. Compared to the established "Direct" optimization method (design updates, FE analysis, post-processing), the Machine Learning assisted optimization methods significantly reduced the optimization time while maintaining similar levels of accuracy. This allowed for more optimization studies, resulting in reduced product development time and increased product performance.
Presented By Felix Pause (dive solutions)
Authored By Felix Pause (dive solutions), Filippo Boscolo Fiore (Dive Solutions GmbH), Daniel Derrix (BMW AG), Ian Pegler (NVIDIA)
AbstractIn today’s competitive automotive landscape, accelerating time to market and optimizing cost efficiency are critical. BMW Group and Dive CAE have examined how advancements in computational fluid dynamics (CFD) can address these challenges, focusing on GPU acceleration, cloud parallelization and Smoothed Particle Hydrodynamics (SPH).The study examined drivetrain design projects, particularly the development of new differential systems. Modern differential systems are becoming increasingly complex, posing significant challenges for design engineers. Additionally, evolving safety, environmental, and performance standards demand iterative redesigns and extensive testing, lengthening development cycles. Several operating points and designs were compared and assessed with respect to oil churning losses and comprehensive oil coverage of system components.The SPH method is particularly effective for modelling problems of this type, involving free-surface or multi-phase flows. Moreover, unlike grid-based methods, its Lagrangian, particle-based framework naturally handles complex geometries and moving components without requiring re-meshing. Additionally, it reduces manual pre-processing work, paving the way for automation of large parallel simulation studies.Dive CAE employs a Weakly Compressible SPH approach (WCSPH), incorporating a variety of measures relevant to industry-level accuracy and usability. Key methods include a semi-analytical integral boundary condition to improve near-wall flow accuracy. This paper outlines the theoretical foundations of the method and provides selected validation results.GPU acceleration of the SPH code demonstrates a runtime reduction by a factor of 5-18 compared to CPU architectures. Cloud parallelization enabled concurrent testing of 12 operating conditions, shortening project turnaround time (TAT) by a factor of 5 compared to an on-premise setup. Finally, the paper also includes an analysis of the cost effect of migrating the simulations to GPUs and the cloud.In conclusion, the study examines how GPU acceleration, cloud technologies and SPH contribute to overarching goals of accelerating time to market and reducing costs.
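For orientation, the core of a weakly compressible SPH formulation is an equation of state that links density fluctuations to pressure, commonly the Tait form sketched below; the reference density, artificial speed of sound and exponent are generic illustrative choices and do not represent Dive CAE's specific implementation or its boundary treatment.

```python
# Sketch: Tait-type equation of state commonly used in weakly compressible SPH.
# p = (rho0 * c0^2 / gamma) * ((rho / rho0)^gamma - 1). c0 is usually chosen about
# 10x the maximum flow speed to keep density fluctuations near 1%. Values are generic.
import numpy as np

RHO0 = 870.0          # reference density of a gear oil [kg/m^3] (assumed)
GAMMA = 7.0           # Tait exponent typically used for liquids
U_MAX = 10.0          # expected maximum flow speed [m/s] (assumed)
C0 = 10.0 * U_MAX     # artificial speed of sound

def tait_pressure(rho: np.ndarray) -> np.ndarray:
    b = RHO0 * C0 ** 2 / GAMMA
    return b * ((rho / RHO0) ** GAMMA - 1.0)

rho = RHO0 * np.array([0.99, 1.00, 1.01])     # +/- 1% density fluctuation
print(np.round(tait_pressure(rho), 1))        # corresponding pressures in Pa
```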
Authored & Presented By Tyler London (Reckitt Benckiser Health Care UK)
DescriptionReckitt is home to some of the world's best-loved and most trusted brands such as Lysol, Durex, Gaviscon and Finish. With a goal to protect, heal, and nurture in the relentless pursuit of a cleaner and healthier world, Reckitt is delivering an ambitious sustainability agenda and pursuing opportunities to continuously innovate products. Underpinning this innovation is a digital transformation that is leveraging advanced simulation techniques and a “Digital First” approach to R&D. This presentation will cover both the strategy and the technology for delivering this capability at scale. To drive productivity and speed of innovation, “virtual labs” are being created where scientists can pursue characterization, scale-up and optimization virtually before undertaking physical experiments. To facilitate this growth in simulation demand, the associated change management related to democratization, upskilling, and compute resource management will be covered. On the technology side, end-to-end applications of finite element analysis (FEA), computational fluid dynamics (CFD), discrete element methods (DEM) and molecular dynamics (MD) and their ability to drive consumer-centric product design and sustainability will be explored.
BiographyTyler London is the Senior Product Manager for Modelling, Simulation and Visualisation at Reckitt, a global leader in consumer health, hygiene, and nutrition known for brands such as Dettol, Durex, Gaviscon, and Finish. In this role, he is responsible for defining and executing the vision of in-silico capabilities such as Computational Fluid Dynamics, Finite Element Analysis, Discrete Element Methods and Molecular Dynamics. Prior to joining Reckitt, he was the Head of the Numerical Modelling department and a Technology Fellow in computational engineering at TWI, an international R&D and consultancy organisation, for 12 years. He joined TWI after obtaining a BSc in Mathematics at Tufts University and an MSc in Mathematical Modelling and Scientific Computing at the University of Oxford. Tyler is also an active participant in the NAFEMS Working Groups for manufacturing process simulations.
Authored & Presented By Harri Koivisto (Ceres Power Limited)
BiographyHarri Koivisto leads the modelling and digitalisation department at Ceres, responsible for multi-domain modelling of solid oxide platform technologies in a wide range of scales and physics domains. His department is also responsible for developing and maintaining a robust cloud data platform and creating bespoke data product solutions. The mission of the team is to accelerate the pace of product development and enable fast data driven business decisions. Harri has extensive experience in modelling, data analysis, and working with several partners worldwide. He enjoys looking for the hardest engineering challenges to solve and making a positive difference to the world. His current problem is an “8 billion people problem” of helping to decarbonize the world at scale and pace. Prior to joining Ceres, Harri had a career in academia at the University of Sussex, designing and teaching mechanical engineering classes to students for a decade, with multiple award-winning outcomes. His PhD was on developing experimental high resolution heat transfer measurement techniques in turbomachinery.
10:20
Authored & Presented By Andy Richardson (PHRONESIM LTD)
AbstractEngineering Simulation is more critical than ever for any organisation engineering and delivering products in today’s highly competitive, complex, and technologically innovative world. Business Leaders need to be able to trust their simulation results to enable them to make product decisions and business commitments with confidence. And yet, simulation for today’s products is highly complex, and the accuracy and reliability of the results generated depend on many factors. Given it is so critical, it is really important that organisations have confidence in their simulation. To do this they need to pay attention to the essential elements that make up every simulation capability: • Efficient processes that define the simulation workflows and are aligned to the overall development processes. • Capable and effective methods that define how to model the specific physics required to deliver the product requirements. • Capable and connected tools to model the correct physics accurately. • Representative and accurate models that reflect the latest design intent. • Reliable and accessible technical data to define material properties, technical specifications, modelling parameters, and use cases. • Skilled and experienced people with product knowledge and experience of the tools and methods, organised effectively to maximise collaboration and efficiency. • Sufficient, reliable and flexible computing infrastructure and resources to execute the complex and large-scale simulations. These are the essential elements of a simulation capability, but before identifying actions there is some preparatory work to do. Organisations need to start by reviewing their product and business goals and identifying their expectations for simulation. They need to identify the direct and indirect stakeholders involved in simulation. And they need to identify the customer, product, manufacturing, and business requirements that the simulation team needs to deliver: the to-do list! With these foundations in place, I believe these are the 7 most important practical actions that every organisation should take: 1. People - Identify your simulation stars and build a collaborative simulation community. 2. Processes - Review your simulation processes to check they define workflows efficiently and align well to your product delivery process. 3. Methods - Assess how well your methods cover your requirements, and ensure you know how much confidence the team has in their methods. 4. Tools - Check your tool landscape, especially gaps, duplicates, tool utilisation, tool chain connectivity and licence models. 5. Models - Create a Modelling Plan. Identify which models are needed for what, when, and with what fidelity for a given project. 6. Data - Check your input data needs, sources, availability and maturity. 7. Computing - Check your job submission process. Is capacity & performance a constraint? These 7 actions will give a good insight into the health of your simulation capability, highlight the strengths and weaknesses in the organisation, enable improvement actions to be prioritised, and provide a great starting point to build a simulation strategy. In this presentation I will explain these 7 key actions and outline how they provide an important foundation for taking the next steps to building a simulation strategy to achieve business goals.
BiographyAndy Richardson is Founder and Director at PHRONESIM Ltd, a company providing independent advice to organisations worldwide to help them maximise the efficiency and effectiveness of their engineering simulation. He is a Chartered Engineer and Fellow of the Institution of Mechanical Engineers. He has 30 years’ experience at Jaguar Land Rover with 20 years in Engineering Senior Management roles including 10 years as Head of Simulation. Andy also spent 2 years in Aerospace as Senior Manager for Airframe Methods and Tools at Airbus. Andy delivers the NAFEMS eLearning course ‘How to Implement a Simulation Strategy’. He is a member of the NAFEMS UK Steering Committee, Assess Business Theme, and Simulation Governance and Management Working Group. Andy holds a BSc in Engineering (Coventry University), an MSc in Numerical Modelling (Aston University) and an MBA (Warwick University Business School).
Presented By Libin Mao (Ostfalia Hochschule f. angew. Wissenschaften)
Authored By Libin Mao (Ostfalia Hochschule f. angew. Wissenschaften)Martin Strube (Ostfalia, University of Applied Sciences) Mathew Mathew (Ostfalia, University of Applied Sciences) Felix Schneider (Ostfalia, University of Applied Sciences) Martin Mueller (Ostfalia, University of Applied Sciences)
AbstractOptimisations in the product development process are computationally intensive and time-consuming and do not always realise the maximum possible potential. In order to improve and methodically analyse this process, automatable CAx process chains are to be linked with AI agents so as to improve the optimisation result and accelerate the development process by reducing simulation time. For the training process of the AI agents, a CAx process chain is used as an environment that includes a parametric-associative coupling of CAD and CAE in order to realise automated and update-stable iterations. The current object of investigation for the AI agents is the use of Deep Reinforcement Learning (DRL). In particular, the following DRL approaches were used in the research project and examined for their usability in terms of time expenditure and optimisation quality: • DQN + Rainbow (Deep Q-Network) • PPO (Proximal Policy Optimization) • DDPG (Deep Deterministic Policy Gradient) • TD3 (Twin Delayed DDPG) • SAC (Soft Actor Critic). However, there are a number of challenges in training AI agents for the optimisation process, which will be discussed in the presentation. AI-based optimisations are in direct competition with traditional optimisations and must therefore justify the greater effort required for the necessary training processes. A decisive hurdle is the data generation for the training process, which is carried out by the numerical simulation as part of the CAx process chain and is very time-consuming. For this reason, various options such as design of experiments, simplification of the simulation models, use of surrogate models and optimisation of the workflow were investigated in order to reduce simulation times, making the training processes more efficient. As is well known, the definition of the reward functions has a major influence on the convergence of a training process. In this case, the particular challenge lies in the fact that the trained agents are to be applied to different components. With this in mind, the reward functions must be designed using generic parameters. This offers the advantage that trained AI agents can be used as a tool for similar problems (transfer learning). The findings on the challenges mentioned will be discussed in the presentation. The AI-based optimisations will be presented using several structural-mechanical examples from automotive engineering.
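A minimal sketch of how such a CAx chain might be wrapped as a reinforcement-learning environment, with a reward built only from generic, component-independent quantities, is shown below. The run_cax_chain() stub, the reward weights and the termination rule are hypothetical placeholders rather than the project's actual implementation.

```python
# Illustrative skeleton of a CAx process chain wrapped as an RL environment
# with a generic reward (normalized mass saving minus a penalized stress
# constraint violation), so the same agent could be reused on different
# components. run_cax_chain() and all numbers are hypothetical stand-ins.
import numpy as np

def run_cax_chain(params):
    """Placeholder for the automated CAD update + FE simulation."""
    mass = 1.0 + params.sum() * 0.1
    max_stress = 250.0 / (1.0 + params.clip(0, None).sum())
    return {"mass": mass, "max_stress": max_stress}

class StructuralOptimizationEnv:
    def __init__(self, n_params=4, stress_limit=260.0, w_mass=1.0, w_violation=5.0):
        self.n_params, self.stress_limit = n_params, stress_limit
        self.w_mass, self.w_violation = w_mass, w_violation

    def reset(self):
        self.params = np.zeros(self.n_params)
        self.ref = run_cax_chain(self.params)          # reference design
        return self.params.copy()

    def step(self, action):
        self.params = np.clip(self.params + action, -1.0, 1.0)
        res = run_cax_chain(self.params)
        # Generic reward: relative mass saving minus penalized constraint violation.
        mass_gain = (self.ref["mass"] - res["mass"]) / self.ref["mass"]
        violation = max(0.0, res["max_stress"] / self.stress_limit - 1.0)
        reward = self.w_mass * mass_gain - self.w_violation * violation
        done = violation == 0.0 and mass_gain > 0.2
        return self.params.copy(), reward, done, res

# Usage: env = StructuralOptimizationEnv(); obs = env.reset()
# obs, r, done, info = env.step(np.array([0.1, -0.05, 0.0, 0.02]))
```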
Presented By Vito Murgida (Hitachi Energy)
Authored By Vito Murgida (Hitachi Energy)Sauro Vannicola (Hitachi Energy Ltd.) Luigi De Mercato (Hitachi Energy Ltd.) Tomasz Nowak (Hitachi Energy Research Poland) Mariusz Osika (Hitachi Energy Research Poland)
AbstractWelding is widely used across manufacturing because it is an efficient and cost-effective way of joining metals. However, a major issue with welded joints is their lower fatigue strength compared to the base material. This makes the weld seams the most vulnerable regions in a structure under fatigue loads. Therefore, fatigue life assessment should prioritize weld seams. Various methods, such as nominal stress, effective notch, and hot-spot approaches, have been developed for this purpose. Additionally, real structures often face random loading, like those in offshore platforms and railway equipment, which requires fatigue analysis in the frequency domain using statistical moments. This work presents a numerical methodology based on Finite Element Analysis. It evaluates the fatigue life of welded components subjected to stationary random loading characterized by a Gaussian probability density function. The approach employs maximum absolute principal stress as the equivalent stress criterion, utilizes the hotspot method for stress calculation at the weld, and applies the Dirlik method to statistically estimate the number of stress cycles. The Dirlik technique allows the analysis of broadband Gaussian random stresses based on their characteristics in the frequency domain. Fatigue life calculations are based on fatigue curves from international standards, such as Eurocode 3 and the International Institute of Welding (IIW), calibrated for the hotspot method. The methodology enables fatigue life assessment when the Power Spectral Density (PSD) of the input excitation is known, as often specified by customers in industrial applications. A comprehensive mathematical procedure for implementing this approach is provided. In addition, an experimental validation of the proposed methodology is included. Eight welded specimens of structural steel S355J2+N were subjected to random loading with a known PSD until failure. Four uniaxial strain gauges were positioned at specific points on each specimen to measure local strains and to compute their cycle distribution, showing consistency between the Rainflow and Dirlik estimations. The SN curves of the specimens with survival probabilities of 10%, 50%, and 90% were preliminarily measured as per ASTM E466, to properly calibrate the input during the random test and avoid excessive test duration. Finally, a Finite Element model of the tested specimens was developed and the methodology described here was applied. The measured experimental fatigue life of the specimens proves to be consistent with the Finite Element-based fatigue life predictions from the proposed methodology, which proved to be conservative. For completeness, an analysis substituting the hotspot method with the effective notch method was conducted, revealing that the latter offers even greater accuracy in fatigue life estimation. In conclusion, the presented Finite Element methodology provides a reliable tool for assessing fatigue life in welded components under random loading, with practical applicability in industrial contexts.
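The frequency-domain chain described above (spectral moments of the stress PSD, the Dirlik amplitude distribution, and a Miner damage sum against an S-N curve of the form N = C * S^(-k)) can be sketched in a few lines of Python. The PSD, duration and S-N constants below are illustrative numbers only, not the values used in the study.

```python
# Sketch of a frequency-domain fatigue estimate: spectral moments of a stress
# PSD, the Dirlik amplitude distribution and a Miner damage sum against an
# S-N curve N = C * S^(-k). All input numbers are illustrative placeholders.
import numpy as np

def spectral_moments(f, psd, orders=(0, 1, 2, 4)):
    return {n: np.trapz(f**n * psd, f) for n in orders}

def dirlik_pdf(S, m):
    m0, m1, m2, m4 = m[0], m[1], m[2], m[4]
    xm = (m1 / m0) * np.sqrt(m2 / m4)
    gam = m2 / np.sqrt(m0 * m4)                      # irregularity factor
    D1 = 2.0 * (xm - gam**2) / (1.0 + gam**2)
    R = (gam - xm - D1**2) / (1.0 - gam - D1 + D1**2)
    D2 = (1.0 - gam - D1 + D1**2) / (1.0 - R)
    D3 = 1.0 - D1 - D2
    Q = 1.25 * (gam - D3 - D2 * R) / D1
    Z = S / (2.0 * np.sqrt(m0))
    p = (D1 / Q * np.exp(-Z / Q)
         + D2 * Z / R**2 * np.exp(-Z**2 / (2.0 * R**2))
         + D3 * Z * np.exp(-Z**2 / 2.0))
    return p / (2.0 * np.sqrt(m0))

# Hypothetical hotspot-stress PSD (MPa^2/Hz) and test duration.
f = np.linspace(5.0, 500.0, 2000)
psd = 4.0 * np.exp(-((f - 80.0) / 40.0) ** 2) + 0.2
m = spectral_moments(f, psd)
Ep = np.sqrt(m[4] / m[2])                            # expected peaks per second
S = np.linspace(0.1, 20.0 * np.sqrt(m[0]), 4000)     # stress range axis
C, k, T = 1.0e12, 3.0, 3600.0                        # illustrative S-N curve, 1 h test
damage = Ep * T * np.trapz(S**k * dirlik_pdf(S, m), S) / C
print("Miner damage over test duration:", damage)
```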
Presented By Remko Moeys (ESA/ESTEC)
Authored By Remko Moeys (ESA/ESTEC)Raul Avezuela (Empresarios Agrupados)
AbstractThis paper presents the digital twin of the Large Space Simulator (LSS) as it undergoes the final stage of its development. Located in The Netherlands, the LSS is Europe’s largest thermal vacuum chamber and is used by the European Space Agency to test spacecraft under representative space conditions: vacuum, cryogenic temperatures and powerful, dynamic solar illumination. The purposes of this digital twin are to simulate: 1. future test campaigns (standard or specific ones) to support the training to operate the LSS facility; 2. the performance of the facility with future hardware or software modifications, and to carry out software/hardware-in-the-loop pre-tests; 3. abnormal facility operation with failed equipment. The digital twin consists of three layers: 1. a high-fidelity EcosimPro model of the LSS to simulate its physical performance; 2. a virtual version of the LSS Programmable Logic Controller to execute the process control; 3. a Human-Machine Interface identical to the one of the LSS for the user to interact with. A co-simulation manager ensures the exchange of information between the above three layers and enables digital twin-specific functionalities such as adjusting the simulation speed, uploading/starting/stopping/saving a simulation run, loading pre-defined failure scenarios and virtually carrying out the key procedure steps that are manually performed in the field. To maximise the representativeness of the real facility operation, the digital twin is designed to be operable from the same monitors of the LSS control room and to display the simulation results using the same data acquisition and presentation software used by the LSS: STAMP (System for Thermal Analysis, Measurement, and Power supply control), developed by Therma. The digital twin is conceived to be operable by a trainee and an instructor simultaneously. The project was kicked off in November 2023, underwent a detailed validation review of the models against test data in October 2024, and is expected to be completed by mid-2025. The prime contractor of this project is Empresarios Agrupados – GHESA, which is also the owner of the modelling software used (EcosimPro).
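As a purely schematic illustration of the co-simulation manager's role, the sketch below exchanges values between a physics-model stub (standing in for the EcosimPro layer) and a control-logic stub (standing in for the virtual PLC) at every macro step, with a speed factor and a pre-defined failure scenario. All names, control laws and numbers are hypothetical; the actual LSS digital twin is far richer.

```python
# Highly simplified co-simulation loop: exchange values between a physics stub
# and a PLC stub each macro step, pace execution with a speed factor and allow
# a pre-defined failure scenario to be injected. Illustrative only.
import time

def physics_step(state, heater_on, dt):
    # Toy shroud temperature model: heating or passive cool-down towards 100 K.
    rate = 0.5 if heater_on else -0.2
    state["T_shroud"] = max(100.0, state["T_shroud"] + rate * dt)
    return state

def plc_step(state, setpoint=293.0):
    # Toy bang-bang control law in place of the virtual PLC program.
    return state["T_shroud"] < setpoint

def run_cosimulation(duration=60.0, dt=1.0, speed_factor=10.0, failure=None):
    state, t = {"T_shroud": 293.0}, 0.0
    while t < duration:
        heater_on = plc_step(state)
        if failure == "heater_stuck_off":          # pre-defined failure scenario
            heater_on = False
        state = physics_step(state, heater_on, dt)
        t += dt
        time.sleep(dt / speed_factor)              # simulation-speed adjustment
    return state

print(run_cosimulation(duration=20.0, failure="heater_stuck_off"))
```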
Biography• Remko Moeys is in charge of the investment projects to upgrade the thermal vacuum facilities of the Test Centre of the European Space Agency. • Most of his work in the last 5 years has been dedicated to improving the Large Space Simulator. • In 2023, Remko initiated the idea of a digital twin of the Large Space Simulator and since then he has been leading its development. • Before joining ESA he worked for 5 years on aviation and space projects in Belgium, where he led a team of material and test engineers. • Remko has a master’s degree in mechanical and aerospace engineering from the University of Southampton in the UK, where he specialised in fluid dynamics.
Presented By Luca Francesconi (Logitech Europe)
Authored By Luca Francesconi (Logitech Europe)Nuno Valverde (Logitech Europe SA) Sterling McBride (Dassault Systemes)
AbstractSimulations for predicting acoustic emissions from impulsive and transient dynamic phenomena in small electro-mechanical components, such as those commonly found in consumer electronics, remain both novel and challenging. Some of these components act as direct Human-Machine Interfaces (HMIs) between users and the devices being operated. One such application is the microswitch embedded in computer mice. Besides the functional operation of the device, microswitches also double as the primary source of both tactile and acoustic feedback to the user upon clicking the mouse keys. The afforded feedback is a complex array of multimodal sensorial cues and includes fast transient events such as impulsive phenomena. From the component level to the integration at system architecture, both the acoustic emissions and the mechanical behavior of such components constitute the main source of User Experience (UX) for mouse clicking. Predicting, through simulations, the vibro-acoustic performance of microswitches and their integration in the product, in this case a computer mouse, can enable design practices that deliver better experiences, including addressing potential sound quality issues at earlier stages of product development. Recent advancements in simulation software and improved computational resources open up the possibility of modeling increasingly complex vibro-acoustic phenomena. The goal of this research is to understand the current capabilities of simulation software to accurately predict such phenomena. This work aimed at modeling the full simulation of the vibro-acoustic response of a microswitch at the component level. This included the full operational cycle, namely the closure (push) and opening (release) switch events. This paper reports the simulation methodology adopted, from the structural and transient analysis to the acoustic radiation emerging from the component. Structural simulations involved driving pre-stressed Finite Element (FE) models with adequate and experimentally known input forces. A time-domain explicit FE simulation modeled the rapid displacement and buckling of the internal components upon operation for the full cycle. This model was experimentally validated with high-speed footage and position tracking of the moving switch elements under real operating conditions. The simulation analysis further explores the model’s vibration response of the mechanical system across a range of frequencies meaningful to human hearing. Derived from these structural vibrations, sound is generated from the rapid displacement of the fluid (air) surrounding the structure. The acoustic propagation is thus simulated by modeling both the internal cavities of the switch as well as the surrounding air volume for its casing. In order to enable a more efficient use of computational resources, a hybrid mesh was adopted using both FE and Boundary Element Methods (BEM). Experimental audio recordings of switch samples’ emissions were also used to compare and validate the model. It was found that fine-tuning simulation model parameters such as damping and material properties is essential in order to accurately reflect the physical behavior. This includes sound quality metrics in both time and frequency domains as well as auralizations. The output results from the simulation can match both the spectral and time-domain characteristics of real audio within a standard measurement error. This study found the simulation methods adopted and their results to be valid. It proposes a methodology to simulate complex vibro-acoustic phenomena in similar and other applications. Overall, this paper also provides a state-of-the-art perspective on the current vibro-acoustic simulation capabilities available to academia and industry.
Authored & Presented By Anas Yaghi (TWI)
AbstractThe process of powder bed fusion (PBF) is becoming well-established within the manufacturing industry. Although this manufacturing process can produce components that would otherwise be too difficult or sometimes even impossible to produce with conventional manufacturing processes, it still has challenges that need to be circumvented or solved before it can become more widely used throughout industry. The problems associated with PBF are usually categorised as manufacturing process or structural integrity problems. One very effective tool in addressing and resolving such problems is finite element analysis (FEA). This numerical tool can reveal information and uncover trends of behaviour that would otherwise remain hidden. It has its own challenges, however, as it inherently relies on making assumptions and simplifications, which at times may compromise the accuracy or validity of the numerical outcome. It is therefore important to apply FEA correctly and effectively. Whether PBF is achieved through laser or electron beam application, the process can be modelled using the same approach in FEA. The process of metallic PBF entails the deposition of many layers of metal after sequentially melting the powder and allowing it to solidify to form the layers precisely, producing the desired component shape. This can be modelled in FEA by two different methods. One method involves modelling the deposition of the layers by activating the mesh elements of the numerical model that correspond to the metal as it is being deposited; hence, it is a gradual and sequential procedure of element activation. The other method is to alter the material properties of the metal to reflect the formation of the liquid and then the solid material from the original powder state; hence, it is also a gradual procedure, in this case governed by the temperature history of the material, which in turn is dictated by the movement of the heat source. Both of these numerical methods have their own advantages and drawbacks. In the research reported here, the two methods are described and compared by generating and simulating corresponding thermal FEA models of a cuboid made of an aluminium alloy with a single pass of deposited metal along the top. The advantages and challenges of each method are presented and explained, including degrees of numerical accuracy and stability. Conclusions are also drawn that can help in making the right choice when modelling is applied to the process of PBF.
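To make the contrast between the two modelling methods concrete, the schematic 1-D sketch below advances a travelling heat source along a rod once with progressive element activation and once with powder-to-solid property switching. It is a toy explicit update with illustrative material values, not the thermal FEA models of the reported study.

```python
# Toy 1-D thermal illustration of the two PBF modelling strategies:
# (a) progressive element activation ("element birth") and (b) keeping all
# elements present but switching conductivity from a powder value to a solid
# value once an element has melted. Purely schematic, illustrative values.
import numpy as np

n, dt, dx = 40, 0.005, 1e-3
k_powder, k_solid, rho_cp = 0.2, 150.0, 2.4e6         # illustrative aluminium-like values
T_melt, T_ambient, T_source = 900.0, 293.0, 1800.0

def step(T, k, active):
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * k * lap / rho_cp
    return np.where(active, T, T_ambient)              # inactive elements held at ambient

# (a) element activation: elements join the model as the heat source passes.
T_a = np.full(n, T_ambient); active = np.zeros(n, dtype=bool)
# (b) property switching: all elements exist, powder conductivity until melted.
T_b = np.full(n, T_ambient); melted = np.zeros(n, dtype=bool)

for i in range(n):                                     # heat source travelling along the rod
    active[: i + 1] = True
    T_a[i] = T_b[i] = T_source
    melted |= T_b >= T_melt
    for _ in range(20):                                # a few sub-steps per source position
        T_a = step(T_a, k_solid, active)
        k_b = np.where(melted, k_solid, k_powder)
        T_b = step(T_b, k_b, np.ones(n, dtype=bool))

print("activation-method peak:", T_a.max(), "property-switch peak:", T_b.max())
```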
Biography• Anas Yaghi obtained his BEng and then his PhD in mechanical engineering from the University of Nottingham in 1993. • He worked as a senior research fellow in the Mechanical Engineering Department at the same university until 2012. • Following that, he worked at the Manufacturing Technology Centre in Coventry as a senior research engineer for eight years. • He has worked as a principal modelling and simulation engineer at TWI (UK) since July 2023. • Anas’ main technical specialities are stress analysis, structural integrity assessment, and numerical modelling and simulation, particularly of high-value thermo-mechanical manufacturing processes, such as welding and additive manufacturing. • He has been the vice-chair of NAFEMS Manufacturing Process Simulation Working Group since 2016. • Over the years, Anas has published more than 50 technical papers in international conferences and peer-reviewed journals.
Presented By Stephan Vervoort (Hottinger Brüel Kjaer)
Authored By Stephan Vervoort (Hottinger Brüel Kjaer)Andrew Halfpenny (Hottinger Bruel & Kjaer UK Ltd)
AbstractIn traditional internal combustion engine vehicles, a small, lightweight battery is typically attached to the body in white. In contrast, modern electric vehicles (EVs) integrate the battery as a crucial part of the chassis. The primary challenges for an EV battery include its lifespan, range, and structural durability. The EV battery, comparable in weight to the chassis, significantly influences the vehicle’s dynamic response and structural integrity. It must withstand road loads, provide crash protection, and manage inertial loads from heavy cells. This complex system comprises a load-bearing chassis and numerous joints, making the EV battery not only an electrochemical system but also a sophisticated mechanical structure. The structural durability of EV batteries encompasses fatigue design, fatigue simulation, and qualification testing under structural, thermal, and inertial loads. To assess these factors, EV batteries undergo vibration profiles (including proving ground road load data tests, in which vibration levels are measured for each event) on a shaker table during testing. This setup characterizes fatigue damage and shock spectra for various duty cycles, with the option to accelerate tests if needed. Beyond structural durability assessments, quantifying warranty exposure is crucial for companies. This involves balancing excessive customer loading against the quality and strength of components. The presentation addresses the challenges and issues in the qualification process, including fatigue simulation, verification, and validation of reliability tests to account for uncertainties. The design of the battery pack is a significant uncertainty factor, with uncertainties categorized as either epistemic (reducible through better knowledge, e.g. measurement errors and model simplifications) or aleatoric (irreducible, stemming from natural phenomena such as material variability). The presentation will explore probabilistic design simulation methods and the statistical correlation of simulation and tests with small sample sizes to estimate product reliability and reduce uncertainties. It will conclude with a summary of system-level reliability simulation, highlighting quantifiable risk exposure, reliability growth analysis, and optimal maintenance planning.
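One common way to turn a measured road-load vibration profile into a fatigue damage characterization for shaker testing is a fatigue damage spectrum: for each candidate resonance frequency an SDOF response is estimated with Miles' equation and a narrow-band Miner sum is accumulated. The sketch below shows this idea with illustrative PSD levels, Q-factor, S-N exponent and durations; it is not the presenters' specific procedure.

```python
# Sketch of a fatigue damage spectrum (FDS) comparison between a road-load PSD
# and an accelerated shaker profile. PSDs, Q, S-N exponent and durations are
# illustrative placeholders, not measured battery-pack data.
import numpy as np
from scipy.special import gamma as gamma_fn

def fds(f_psd, psd, fn_axis, T, Q=10.0, b=5.0, C=1.0e15):
    psd_at_fn = np.interp(fn_axis, f_psd, psd)
    # Miles' equation: RMS response of a lightly damped SDOF to broadband input.
    rms = np.sqrt(np.pi / 2.0 * fn_axis * Q * psd_at_fn)
    # Narrow-band assumption: Rayleigh-distributed peaks, fn cycles per second.
    return fn_axis * T / C * (np.sqrt(2.0) * rms) ** b * gamma_fn(1.0 + b / 2.0)

f = np.linspace(5.0, 2000.0, 2000)
psd = np.where((f > 10) & (f < 1000), 0.04, 0.001)        # acceleration PSD in g^2/Hz
fn_axis = np.logspace(1, 3, 100)                          # candidate resonances, 10 Hz - 1 kHz
damage_road = fds(f, psd, fn_axis, T=8.0 * 3600.0)        # e.g. 8 h of proving-ground data
damage_test = fds(f, 4.0 * psd, fn_axis, T=2.0 * 3600.0)  # accelerated 2 h shaker profile
print("test at least as damaging at all frequencies:",
      bool(np.all(damage_test >= damage_road)))
```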
Authored & Presented By Alexander Mahl (PDTec AG)
AbstractDue to global competition, reducing development costs and time to market is essential, not only but especially in the automotive industry. In order to achieve this goal, the digital twin is becoming a key factor. Digital twins are virtual representations of physical systems or processes. This enables developers to detect problems much earlier in the design process (“fail early”) and also to find the “optimal” product design more efficiently. In this area, methods from artificial intelligence can make an important contribution. To make use of this technology, access to reasonable training data is required. A simulation process and data management (SPDM) system is essential for the successful deployment of a credible digital twin, as it enables the management, organization and use of the extensive simulation data. Without SPDM, managing large amounts of data and ensuring data integrity and traceability would be challenging and hard to achieve. An SPDM system thus ensures that the digital twin can unfold its full functionality and can be used as a powerful tool for predictions, optimizations and decision-making. A digital twin can integrate both system (1D) and geometry (3D) simulations to create a comprehensive picture of a real-world system. While 1D simulations are well suited to model systemic processes and dynamic flows, 3D simulations provide detailed physical insights into the behavior of individual components. 1D and 3D simulations serve as foundational elements for digital twins because they provide complementary approaches to modeling, analyzing, and understanding complex systems. The combination of these two simulation domains enables a complete analysis of both system-level behavior and the detailed physical properties of a product. For an SPDM system, it is therefore essential to be able to serve both simulation domains equally. In order to build a credible digital twin, it must be ensured that all simulations across all relevant domains are based on the same input data, such as PDM/CAD and technology data (physical parameters – “real-world data”). In this context, traceability of the input data for each simulation is a key factor. In this presentation, the concepts and their realization with the help of an SPDM system will be demonstrated: Technical parameters are the basis for simulation models across all disciplines. It is important to ensure that all input parameters are provided by a single source. Using an intelligent plugin mechanism, the solver model is populated automatically with the technical parameters. The digital twin is tested in various “virtual test rigs” (different load cases and disciplines) and rated based on specified KPIs. A digital twin consists of simulations from multiple disciplines. In some cases the result of one simulation (from discipline A) is used as input to other simulations (in discipline B). Therefore an SPDM system should be capable of handling 1D and 3D simulations. This gives full traceability from technical input parameters to the results and vice versa - a key enabler for a credible digital twin. The paper describes the process consisting of the steps of collecting technical parameters, setting up simulations for multiple disciplines, analyzing the results and gaining full traceability for the data representing the digital twin.
Biography- Completed a computer science degree at the University of Karlsruhe - Research associate at the Institute for Information Management in Engineering (IMI, formerly RPK) at the University of Karlsruhe / KIT - Since 2008 at PDTec, currently in the role of “Product Management CAE”
Authored & Presented By James Imrie (Rescale)
AbstractThe transition from on-premise high-performance computing (HPC) to cloud-based HPC is a rapidly growing trend in the field of engineering simulation. With advancements in cloud technology, organizations are increasingly moving their simulation workloads to the cloud to benefit from greater scalability, flexibility, and accessibility. Cloud-based solutions allow engineers to leverage cutting-edge hardware without the significant upfront investment required for on-premise infrastructure. However, this shift also introduces a set of unique challenges, requiring careful consideration and strategic planning to fully harness the advantages of the cloud.This presentation explores the transition to cloud-based simulation from the perspective of a seasoned simulation engineer with over 25 years of experience, including a decade of specialization in cloud environments. The focus is on providing practical advice and technical strategies for engineers looking to migrate their simulation workloads to the cloud, addressing the unique challenges and opportunities inherent to this paradigm shift.The discussion will begin by examining hardware configuration challenges, particularly how to identify and configure the optimal hardware setups to meet diverse simulation requirements. Understanding the fundamental differences between cloud infrastructure and traditional on-premise systems is critical for optimizing simulation performance. The nuances of data management in cloud environments will also be addressed, including best practices for handling large simulation datasets with regard to storage, transfer protocols, and robust security measures to ensure data integrity and confidentiality.Attention will be given to the entire simulation lifecycle, encompassing preprocessing, simulation execution, and post-processing workflows. Special emphasis will be placed on optimizing these workflows for a distributed cloud architecture to enhance efficiency and scalability. Cost optimization is another key focus, with strategies to balance computational performance against budget constraints by leveraging cloud-native tools, instance selection, and workload scheduling techniques. Furthermore, the complexities of license management in cloud-based settings will be explored, with practical approaches to efficiently allocate and utilize licenses across distributed resources.The presentation also delves into collaborative and data-sharing dynamics enabled by cloud platforms. It highlights methods for fostering seamless teamwork among geographically dispersed engineering teams and ensuring effective sharing of simulation data within and across organizations. Adapting cloud-based workflows to project-specific requirements and deadlines will be discussed, showcasing how flexibility in resource allocation and scaling can align with varied engineering objectives.Real-world examples and case studies will illustrate these concepts, offering actionable insights into overcoming common obstacles while maximizing the benefits of cloud-based simulation. By leveraging cloud technologies, engineers can achieve substantial improvements in efficiency, reduce operational costs, and accelerate innovation cycles. This presentation aims to equip engineers with the knowledge and tools necessary to navigate the complexities of cloud-based simulation and unlock its full potential for modern engineering challenges.
10:40
Presented By Anders Moe Lund (Aibel)
Authored By Anders Moe Lund (Aibel)Signe Stenseth (Open iT)
AbstractWith the growing demand for advanced engineering and simulation applications, enterprises face challenges in managing software assets. For global organizations, optimizing shared software license portfolios is essential for operational efficiency, collaboration, and compliance. Companies with teams across various regions must ensure software assets are accessible, efficiently utilized, and compliant with regulations, standards, and local laws. Aibel, a provider of engineering solutions in the oil, gas, and offshore wind industries, faced challenges with their global software portfolio. With teams in Norway, Thailand, and Singapore, Aibel’s IT leadership needed to tightly monitor and manage software license usage across time zones. They observed sustained high license consumption—even during project downtimes—indicating inefficiencies and leading to significant costs. This sparked a critical investigation: what was driving this excessive consumption, and how could it be optimized? Aibel recognized the need to delve into their software consumption patterns to uncover the root causes. While they possessed the expertise to pinpoint key metrics for reducing unnecessary expenditures, they lacked the resources for a comprehensive analysis of usage across teams and regions. Without accurate usage reports, measuring the true value of their software investments proved challenging. Through granular analysis of their license usage, Aibel identified overused licenses and, more importantly, quantified their exact licensing needs. This increased visibility enabled Aibel to achieve a remarkable 99% license utilization efficiency for a core application by implementing automated harvesting of idle licenses. This optimization led their software vendor to implement an enterprise agreement based on Aibel’s fully optimized user count, forging a collaborative partnership that benefited all parties involved. Aibel reduced operational costs, the vendor retained revenue from an optimized agreement, and users gained seamless access to resources under an “all-you-can-use” enterprise arrangement without facing denials. Comprehensive license usage analysis offers more than just a pathway to optimization and cost management. It provides companies with a structured, data-driven approach to understanding and managing their IT assets to increase business value for all parties: users, admins, and management. By optimizing software usage, companies can enhance operational efficiency, improve collaboration across global teams, and ensure compliance with evolving industry regulations. Suppliers also value customers that drive maximum business value from their technology, and they build long-term relationships with them. They see that usage metering and optimization tools are here to stay and can create a win-win for all parties.
Presented By Morgan Jenkins (Secondmind)
Authored By Morgan Jenkins (Secondmind)Victor Picheny (Secondmind Ltd) Henri French (Secondmind Ltd)
AbstractIn the rapidly evolving field of automotive engineering, calibration is crucial for optimizing a vehicle’s control systems to achieve peak performance, efficiency, and regulatory compliance. Traditionally, calibration has been a time-consuming, manual process. However, as modern powertrain systems become more complex and interconnected, there is a growing need for innovative approaches to keep pace with technological advancements. Powertrain Calibration Engineers are increasingly adopting virtual environments to conduct these processes, marking a pivotal shift from traditional methods.Virtualizing the calibration process offers numerous advantages for OEMs and suppliers. By reducing reliance on physical prototypes, companies can significantly cut down on costs and enhance time efficiency. Virtual calibration provides unprecedented flexibility and scalability, allowing engineers to navigate vast parameter spaces with improved accuracy and data utilization. Additionally, this approach opens up new opportunities for enhancing safety and achieving comprehensive integration of diverse systems and subsystems.In this talk, Morgan Jenkins, Chief Product Officer of Secondmind, will explore the transformational impact of artificial intelligence and more specifically advanced machine learning techniques on the calibration process. Machine learning models, which can predict outcomes and suggest optimal calibration settings, act as powerful tools to streamline and refine calibration tasks. The talk will delve into how optimization algorithms efficiently explore calibration spaces without the need for physical testing, bolstered by integrating advanced simulation tools to create hybrid models that accelerate the calibration timeline.Furthermore, the presentation will discuss the use of transient and global machine learning models that extend their utility into the vehicle verification and validation phases of development. By drawing on real-world examples from EV powertrain calibration, this talk will showcase how such cutting-edge approaches dramatically accelerate development cycles, thus supporting the industry's drive for innovation and alignment with environmental goals.Ultimately, the adoption of these virtualized strategies sets a new standard for performance optimization, positioning the industry for continued advancement and success in a digital-first landscape.
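A minimal sketch of model-based (Bayesian) optimization, the kind of machine-learning-driven search typically used to explore calibration spaces without physical testing, is given below. The two-parameter plant_model() objective and the candidate-pool acquisition step are generic placeholders, not any specific Secondmind workflow.

```python
# Minimal Bayesian-optimization-style calibration loop: fit a Gaussian process
# to the evaluated points and pick the next setting by expected improvement.
# The objective is a generic stand-in for a plant/vehicle model.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def plant_model(x):
    # Placeholder objective, e.g. energy consumption vs two calibration map parameters.
    return np.sin(3 * x[0]) * (1 - x[1]) + (x[0] - 0.6) ** 2 + 0.5 * (x[1] - 0.4) ** 2

def expected_improvement(Xc, gp, y_best):
    mu, sd = gp.predict(Xc, return_std=True)
    sd = np.maximum(sd, 1e-9)
    z = (y_best - mu) / sd
    return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

X = rng.uniform(0, 1, size=(5, 2))                      # small initial design
y = np.array([plant_model(x) for x in X])
for _ in range(20):                                     # sequential calibration loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    Xc = rng.uniform(0, 1, size=(2000, 2))              # cheap candidate pool
    x_next = Xc[np.argmax(expected_improvement(Xc, gp, y.min()))]
    X = np.vstack([X, x_next]); y = np.append(y, plant_model(x_next))
print("best setting:", X[np.argmin(y)], "objective:", y.min())
```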
Presented By Christos Tegos (BETA CAE Systems)
Authored By Christos Tegos (BETA CAE Systems)Arsenios Zoumpourlos (BETA CAE Systems)
AbstractWeld cracks play a significant role in fatigue failure, making the accurate modeling of both spot welds and seam welds crucial for structural durability. In the automotive industry, various methods, such as force-based and nominal stress approaches, are recommended by international guidelines for assessing the fatigue life of welded components. However, directly incorporating local stress provides more accurate results. The key geometrical parameters of welds are the notch radius at the weld toe and root, along with the penetration depth at the weld root. Accurately simulating the geometric notch requires extremely fine meshes to capture local stress, which can result in considerable computational demands. To tackle this challenge, an approach with superelements is introduced. In finite element analysis, the superelement method simplifies and accelerates the evaluation of complex structures by dividing them into smaller, manageable components called superelements. Each superelement is treated as an independent substructure with its behavior condensed into a reduced set of equations, which are then integrated into the global analysis. In this case, each spot or seam weld is represented as a separate superelement. The study thoroughly outlines the process of generating superelements to represent welds, accurately positioning them within the complete assembly, recovering stress components at the notch and finally calculating fatigue life. For various types of seam welds, such as overlap, Y-joint, laser, butt and corner welds, precise placement along the weld line is particularly challenging and requires special attention. A constant weld radius is applied and the von Mises equivalent stress is used as the fatigue criterion. This work involves the comprehensive analysis of components with welds, providing a thorough and detailed description of the entire process and the challenges faced. It includes comparisons of stress results between superelement welds and those modeled with detailed geometry, aiming to critically evaluate the methodology and emphasize its significant practical benefit.
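The reduction step behind a superelement can be illustrated in its simplest, static (Guyan) form: interior degrees of freedom are condensed onto the boundary degrees of freedom that connect the weld superelement to the global model, and interior displacements are later recovered for local stress evaluation. The small matrix below is arbitrary illustrative data; the study's actual superelements are generated by the FE tool chain.

```python
# Minimal sketch of static (Guyan) condensation underlying a superelement:
# interior DOFs (i) are eliminated and condensed onto the boundary DOFs (b).
import numpy as np

def condense(K, boundary_dofs):
    """Return K_reduced = K_bb - K_bi K_ii^{-1} K_ib and the recovery matrix."""
    n = K.shape[0]
    b = np.array(boundary_dofs)
    i = np.setdiff1d(np.arange(n), b)
    K_bb, K_bi = K[np.ix_(b, b)], K[np.ix_(b, i)]
    K_ib, K_ii = K[np.ix_(i, b)], K[np.ix_(i, i)]
    T = -np.linalg.solve(K_ii, K_ib)          # u_i = T u_b (static recovery)
    K_red = K_bb + K_bi @ T
    return K_red, T

# 4-DOF toy stiffness matrix; DOFs 0 and 3 stay as boundary DOFs.
K = np.array([[ 4.0, -2.0,  0.0,  0.0],
              [-2.0,  5.0, -2.0,  0.0],
              [ 0.0, -2.0,  5.0, -2.0],
              [ 0.0,  0.0, -2.0,  4.0]])
K_red, T = condense(K, boundary_dofs=[0, 3])
u_b = np.array([1.0, 0.0])                    # imposed boundary displacements
u_i = T @ u_b                                 # recovered interior displacements
print(K_red, u_i)                             # u_i would feed local notch-stress recovery
```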
Presented By Muhammad Saeed (ARENA2036)
Authored By Muhammad Saeed (ARENA2036)Anna Scholz (Stuttgart Media University (HdM)) Matus Menczer (Northwestern University, USA) Yash Pampattiwar (University of Stuttgart)
AbstractManufacturers may suffer considerable product quality losses due to defects during production. The manual inspection used in current inspection techniques can be time-consuming, expensive, and inconsistent. This paper addresses a hybrid framework for defect detection in manufacturing. It combines synthetically generated 2D data with 3D point cloud representations to improve the accuracy and efficiency of quality control processes. The approach presented in this research addresses defect patterns such as scratches, porosity, and gaps, using AI-based approaches to create diverse training datasets and thereby addressing the challenge of obtaining labelled actual defect data. In order to analyze the defects that occur during a manufacturing process, this research uses a 3D scanning tool such as the Faro scanning arm to collect point cloud data of the formed part. The 2D defects are generated synthetically and integrated with the spatial richness of 3D point cloud data. This enables a dual-layer validation approach in which defect predictions are verified and validated using actual images, improving reliability. The hybrid framework uses new methods to combine the strengths of transfer-learning CNNs for analyzing 2D data and PointNet-based architectures for 3D analysis, helping to achieve accurate classification of defects. New methods for modelling defects, such as using GANs to create synthetic data, help detect complex defects effectively. The framework was tested on industrial datasets. It connects synthetic and real-world applications and offers a scalable, semi-supervised process for detecting defects in real time. This hybrid framework reduces the need for manual inspections, improves production efficiency, reduces waste, and ensures better product quality. It is especially suitable for high-precision aerospace, automotive, and electronics manufacturing industries. This research addresses critical challenges in automated quality control by combining AI, synthetic data creation, and real-world testing. It provides a solid and adaptable solution for the future of manufacturing. This study introduces a new way to measure defects in produced parts using 2D and 3D data, offering an accurate, reliable, and efficient approach to identifying defects. It has been tested through experiments and compared with other advanced methods, such as feature-extraction image analysis, showing its ability to handle large amounts of data. This method can be easily added to current manufacturing processes, potentially leading to better-quality parts and lower production costs.
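A schematic of the two-branch fusion idea, a small CNN for (synthetic) 2D defect images combined with a PointNet-style branch (shared per-point MLP followed by a symmetric max pool) for 3D scan point clouds, is sketched below in PyTorch. Layer sizes and the random input tensors are illustrative only; the paper's networks (transfer-learning CNNs, GAN-based data generation) are more elaborate.

```python
# Schematic two-branch classifier fusing 2-D image features and PointNet-style
# point-cloud features before classification. Illustrative sizes and data.
import torch
import torch.nn as nn

class HybridDefectNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(                      # 2-D image branch
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten())
        self.pointnet = nn.Sequential(                 # 3-D point-cloud branch
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32 + 128, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, image, points):
        img_feat = self.cnn(image)                              # (B, 32)
        pt_feat = self.pointnet(points).max(dim=2).values       # symmetric max pool -> (B, 128)
        return self.head(torch.cat([img_feat, pt_feat], dim=1))

model = HybridDefectNet()
image = torch.randn(2, 1, 64, 64)                      # synthetic 2-D defect patches
points = torch.randn(2, 3, 1024)                       # scanned point clouds (x, y, z)
print(model(image, points).shape)                      # torch.Size([2, 4])
```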
Authored & Presented By Kristian Kvist (Grundfos DK A/S)
AbstractThe classical Equivalent Radiated Power (ERP) approximation, while computationally efficient for early-stage acoustic performance assessment, has known limitations, particularly at low Helmholtz numbers where it tends to overpredict acoustic power radiation. This paper introduces an improved method called "Radiation Efficiency Varying Equivalent Radiated Power" (revERP), which significantly enhances the accuracy of classical ERP while maintaining its computational advantages. The revERP method introduces a geometry-, frequency-, and vibration pattern-dependent approximation of radiation efficiency as a corrective factor for classical ERP calculations. This approximation is developed through two key innovations: First, a characteristic size of the vibrating body is approximated using a simple optimization scheme to fit a virtual spherical surface to the vibrating body. Second, the radiation efficiency is approximated using a weighted average of analytical solutions for spherical multipoles, based on spherical harmonic decomposition. The revERP method may be viewed as a simple post-processing step, requiring no information other than what is already required in the classical ERP approximation. Enabled by highly efficient modern algorithms for evaluating special functions (notably, for the current method, spherical Hankel functions, Legendre polynomials and Bessel functions) as well as modern algorithms for performing fast spherical harmonic decomposition, the revERP method adds negligible computation time to the classical ERP approximation. Most notably, the revERP method requires no surface integrals to be evaluated. Numerical tests on components of industrial complexity demonstrate that revERP significantly outperforms classical ERP, particularly at low Helmholtz numbers. The method converges to classical ERP at high frequencies, preserving its known accuracy in this regime. While the method shows limitations for bodies with large aspect ratios or locally resonant subsystems, it provides a valuable compromise between accuracy and computational efficiency for early-stage acoustic performance prediction in product development. The accompanying paper contributes to the field of numerical acoustics by providing a physically principled enhancement to a widely used engineering approximation, enabling more reliable early-stage acoustic assessments without sacrificing the computational efficiency that makes ERP valuable in industrial applications.
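The post-processing nature of the correction can be seen from the classical ERP formula, ERP = 0.5 * rho * c * sum_i A_i |v_n,i|^2, which implicitly assumes a radiation efficiency of one. The sketch below applies only the lowest-order (monopole, pulsating-sphere) efficiency for a fitted radius as a stand-in; the revERP method itself uses a weighted average over spherical multipoles obtained from a spherical-harmonic decomposition. Surface areas and velocities are random placeholders.

```python
# Sketch of classical ERP and of a frequency-dependent radiation efficiency
# applied as a post-processing correction. Only the monopole (pulsating
# sphere) efficiency is used here; surface data are random placeholders.
import numpy as np

rho, c = 1.21, 343.0                                   # air density and speed of sound

def erp_classical(areas, v_n):
    """ERP = 0.5 * rho * c * sum_i A_i |v_n,i|^2  (radiation efficiency = 1)."""
    return 0.5 * rho * c * np.sum(areas * np.abs(v_n) ** 2)

def sigma_monopole(f, a):
    """Radiation efficiency of a pulsating sphere of radius a (lowest multipole only)."""
    ka = 2.0 * np.pi * f / c * a
    return ka**2 / (1.0 + ka**2)

# Random stand-in for element areas and complex normal velocities at one frequency.
rng = np.random.default_rng(0)
areas = rng.uniform(1e-5, 5e-5, 500)                   # m^2
v_n = rng.normal(0, 1e-3, 500) + 1j * rng.normal(0, 1e-3, 500)   # m/s
a_fit = 0.05                                           # characteristic radius from a sphere fit

for f in (100.0, 500.0, 2000.0):
    erp = erp_classical(areas, v_n)
    print(f, "Hz  ERP:", erp, " corrected:", sigma_monopole(f, a_fit) * erp)
```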
Authored & Presented By Arjan Wiegmink (NLR - Royal Netherlands Aerospace Centre)
AbstractSimulating Directed Energy Deposition (DED) processes can be a complex task, especially when working with high-performance materials like Ti6Al4V alloy. Accurate finite element models, however, require calibrated thermal and mechanical parameters, which can be challenging to obtain. With an accurate model, the stresses, strains and deformation of any part produced with DED can then be predicted. This research investigates multiple calibration methods for DED process simulation of Ti6Al4V alloy, focusing on thermal and mechanical parameter extraction. Two calibration approaches are investigated: manual tuning and machine learning-based optimization. Additionally, this study aims to contribute to the standardization of the calibration process, ensuring consistency and reproducibility across different simulations and materials. The study identifies five key parameters: absorption, convection during printing, convection during cooldown, emissivity of the printed part and emissivity of the base plate. A custom-designed experimental setup enables printing of five parts with temperature measurements at three locations, supplemented by thermal imaging using an infrared camera. The infrared camera data is used to calibrate the emissivity of the printed part and base plate, accounting for surface roughness effects on radiative heat transfer. This set-up makes it possible to print the thermal calibration part, the mechanical calibration part and a validation part in one job. With standardization in mind, a reusable plate in which the thermocouples are mounted was designed for the set-up. This plate guarantees that the thermal measurements are taken at the same location for each future calibration, thus improving the consistency of the calibration process. The calibrated parameters are then used in a thermo-mechanical simulation setup using the Additive Manufacturing (AM) plugin in Abaqus. The simulation consists of two parts: thermal and mechanical. During the thermal simulation, elements are activated over time and given a heat input corresponding to the laser power. The absorption coefficient determines how much of this heat input is absorbed by a certain element, resulting in an increase of temperature. This creates a temperature gradient over time of the printed part. Once the temperatures during the printing process are known, these results are used as input for the mechanical simulation. The mechanical simulation consists of three steps: printing, cooldown, and declamp. The simulation results are compared to experimental data in the form of temperature measurements and deformation of the part, allowing for the calibration of the parameters. Our results show that both calibration methods can accurately predict thermal and mechanical behavior, but machine learning-based optimization outperforms manual tuning in terms of the time required for the calibration, although it does require more simulation results. That process can be automated, however, reducing the effort the calibration requires. This study highlights the importance of accurate calibration of parameters such as absorption, convection during printing and cooldown, and emissivity of the printed part and base plate. Lastly, it will be discussed how our findings can be applied to improve DED simulation for other materials, and how standardization of the calibration process can facilitate the development of more accurate and reliable simulations across different materials and applications.
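Formulated as an optimization problem, the calibration amounts to adjusting the five parameters until simulated thermocouple histories match the measurements. The least-squares sketch below illustrates this with a toy simulate_thermal() placeholder standing in for a rerun of the Abaqus AM thermal model; in practice each evaluation is an expensive simulation, which is why the machine-learning-based approach pays off.

```python
# Sketch of calibration formulated as a least-squares fit of the five thermal
# parameters to thermocouple data. simulate_thermal() is a toy placeholder for
# an expensive rerun of the thermal model; data are synthetic.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 600.0, 50)                         # s, thermocouple time axis

def simulate_thermal(p, t):
    """Toy response: absorption raises the peak, convection/emissivity speed cooling."""
    absorption, h_print, h_cool, eps_part, eps_plate = p
    peak = 900.0 * absorption
    decay = 0.002 * h_print + 0.004 * h_cool + 0.01 * eps_part + 0.005 * eps_plate
    return 293.0 + peak * np.exp(-decay * t)

# Synthetic "measurement" generated with known parameters plus noise.
p_true = np.array([0.35, 15.0, 8.0, 0.6, 0.4])
measured = simulate_thermal(p_true, t) + np.random.default_rng(2).normal(0, 2.0, t.size)

residual = lambda p: simulate_thermal(p, t) - measured
p0 = np.array([0.5, 10.0, 10.0, 0.5, 0.5])              # initial guess
bounds = ([0.1, 1.0, 1.0, 0.1, 0.1], [0.9, 50.0, 50.0, 1.0, 1.0])
fit = least_squares(residual, p0, bounds=bounds)
print("calibrated parameters:", fit.x)
```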
Presented By Carsten Schmalhorst (AVL Deutschland GmbH)
Authored By Carsten Schmalhorst (AVL Deutschland GmbH)Josef Ruetz (AVL Deutschland GmbH) Matteo Fritz (AVL Deutschland GmbH)
AbstractThis study investigates the virtual optimization of battery modules, focusing on both manufacturing processes and operational performance. The physical connection between the battery cell, the cooling system, and the housing is crucial for optimizing the overall performance of the battery pack. Therefore, a thorough examination of the adhesive used to bond these components is essential. Manufacturing Process Optimization: Battery cells are bonded to the housing and a carrier plate, which often includes an integrated water-cooling circuit. The bonding adhesive significantly impacts cell heat dissipation, module strength, and retention. This process is simulated using the commercial CFD software PreonLab, which is based on the Smoothed Particle Hydrodynamics (SPH) method. Through simulation, production time and adhesive distribution can be optimized. The investigated virtual manufacturing process includes: applying adhesive to the panel – assessing the time required for the process and the adhesive distribution; and applying the cells to the plate – evaluating the forces on the cells and the final distribution of the compressed gap filler. Different adhesive application variants are tested virtually, resulting in varied outcomes for production time and thermal conduction between the cooling plate and cells. Battery Performance Optimization: The charging time (e.g., 20% to 80% SOC) is often used to assess the performance of electric vehicles. In this study, the impact of different gap filler variants on the fast-charging behavior of battery modules is evaluated using the AVL CFD software AVL FIRE M. Through the spatial resolution in CFD, we enable monitoring of localized thermal and electrochemical effects, ensuring optimal battery health management and fast-charging performance. Using a physical, detailed 3D battery model as a virtual twin, the behavior of each battery cell and the entire system was analyzed. This investigation reveals how different bonding options affect cell temperatures, fast-charging behavior, and thermal management. It demonstrates that simple 1D system simulations with idealized models are insufficient for accurately predicting fast-charge behavior in complete battery packs. Conclusion: This investigation provides valuable insights into optimizing production processes and predicting fast-charging times in battery packs. By employing CFD methods, we enhance the efficiency of thermal management systems through spatial resolution, enabling precise monitoring of localized effects. Additionally, the use of virtual development techniques significantly reduces costs and streamlines the early stages of development through virtual testing and optimization.
BiographySolution Area Expert, Virtual Development Powertrain Performance and Thermal, AVL Advanced Simulation Technologies. Dr.-Ing. from the Chair of Fluid Mechanics, Technische Universität München; dissertation topic: simulation and optimization of turbomachinery. Joined AVL in 2011 as Senior CFD Analyst, focusing on 3D CFD of exhaust aftertreatment systems, ICE combustion simulation and injector nozzle flow. As Solution Area Expert, working on new applications: gearbox lubrication and thermal behavior, vehicle soiling, and ICE performance & emissions.
Authored & Presented By Marc Vidal (CADFEM Germany GmbH)
AbstractIn recent years, the hype surrounding artificial intelligence (AI) has opened many doors, but many companies are still stuck in the orientation phase. Although they would like to jump on the AI bandwagon, they often struggle to identify valuable use cases and achieve tangible results. To overcome this phase, it is crucial to approach AI implementation in the right order: starting with a solid business case and the selection of the appropriate technology. The combination of AI with engineering expertise can lead to significant advancements. It enables a safer and faster quotation phase, it allows time-consuming development steps to be accelerated, and it is key to driving a "Shift Left" approach in order to explore variants earlier in the development process. Although this sounds logical, several technical challenges still need to be addressed: - Handling a limited amount of training data and various data sources from experiments and simulations - Robust validation methods for AI predictions to support decision-making - Efficient and traceable integration of AI-based tools and methods into the development process. We will discuss the following methods to address these challenges using examples from fluid dynamics, process automation, and HF technology: 1. ML Tool Stochos: Using a machine learning (ML) model specifically suited for engineering applications, which includes confidence values and can process any type of numerical data. 2. Management of CAE Data for and from AI Use with Ansys Minerva: For training, existing data, tests or specially created simulation data are used. The trained AI is thereby linked to the knowledge available at the time of training and needs to be versioned. It will be used to provide predictions, optimizations, or applications for users outside the group of simulation experts. In fact, the knowledge base used for training changes over time, resulting in new versions of the trained models and the resulting applications. To answer which decision was made based on which AI model and training status, CAE data organization is necessary. This starts with the structured storage of knowledge from previous CAE projects, includes traceability of the training, and extends to the provision of managed apps whose use is traceably documented. Conclusion: The successful integration of AI into engineering processes requires a strategic approach that begins with a clear business case and includes the selection of the right technology for the task. These practical examples illustrate how these challenges can be addressed, from the actual ML tool to a traceable integration into the development process.
BiographyDegree in Civil Engineering from the TU Munich. With CADFEM since 2002. Experience in customer support, sales and business development. Today: advising and supporting customers in the integration of CAE into their development processes.
Presented By Andres Rodriguez-Villa (Techsoft 3D)
Authored By Andres Rodriguez-Villa (Techsoft 3D)Fredrik Viken (Tech Soft 3D)
AbstractThe migration of legacy desktop CAE applications to modern web-based solutions represents a significant leap forward in engineering collaboration, data accessibility, and solution deployment. However, a key challenge in this transformation is balancing the benefits of modern web technologies with not only the security of data on the web but also the substantial investment required to fully migrate existing complex desktop applications built and hardened over sometimes decades. The first solutions to appear for remote CAE were remote-desktop approaches – running the legacy desktop application on the server and, instead of rendering it on a local screen, streaming those pixels to a remote one. This technology requires no transformation or rewriting of the legacy desktop application, needs a limited investment to implement and almost instantly provides the full feature set of the legacy application. However, it does not take full advantage of the web’s collaboration and sharing opportunities and does come with some caveats such as security issues, a user experience that leaves room for improvement, limited scalability and operational cost. At the other end of the scale, browser-based solutions – true web applications – that allow users to visualize, share and analyze remote CAE result databases require a considerable investment to rebuild the legacy feature set and user experience. The reasons to engage in such a global change are to work around the drawbacks of remote desktops: enhance the end-user experience through client-side rendering, mitigate security issues by ensuring only the active view data leaves the organization, ensure scalability and, finally, considerably reduce the server cost by eliminating the need for GPUs. This paper presents a novel software service installed on the server side that aims at reducing the cost of moving any legacy desktop CAE application, by allowing the full reuse of the data processing layer of the application – the “application logic” – and then progressively streaming the visualization data it receives to any number of clients for in-browser rendering. Such an approach ensures the preservation of existing features and workflows that have been built, hardened and enhanced over sometimes decades. After a review of existing remote CAE solutions with a focus on remote-desktop approaches, this article delves into the new streaming service’s architecture and describes its technical implementation. A closer look is taken at its interface with the application logic and other potential sources of CAE data, as well as how it efficiently conveys data to a remote WebGL engine for client-side rendering. Finally, a simple example showcases the flow of data and the reusability of an existing CAE application, providing a roadmap for organizations seeking to modernize their desktop products.
Presented By Manuel Morales (Resemin)
Authored By Manuel Morales (Resemin)Fernando Diaz Lopez (RESEMIN) Jose Pereiras (Dassault Systemes) Srikrishna Chittur (Dassault Systemes)
AbstractRESEMIN is a custom mining equipment manufacturer from Peru that builds a wide range of equipment for underground mining operations around the world. The company adopted a unified modeling & simulation approach in 2009 in order to overcome its business-critical engineering challenges. Key challenges experienced by RESEMIN include: i. field failure of the boom arm, resulting in warranty costs and expensive operational breakdowns for their customers; ii. expensive physical testing for the safety of the operator cabin, which is mandatory for meeting safety compliances (ROPS & FOPS); iii. increasing complexity of machines, which increases simulation time and thereby limits design & simulation iterations within shrinking product development timelines. In this session, a Structural Analyst and the Engineering Manager from RESEMIN will discuss how they accelerated innovation of underground mining equipment and overcame the challenges mentioned above with a unified modeling & simulation approach across structural, fluid, thermal, electromagnetic and plastic injection molding simulation workflows. This session will cover technical details of structural simulation (FEA) workflows for boom strength and durability analysis, virtual Roll-Over Protection Safety (ROPS) testing, virtual Falling Object Protection Safety (FOPS) testing, and automation of component-level simulations for designers. The Computational Fluid Dynamics (CFD) analysis methodology for HVAC and operator cabin thermal comfort, the electromagnetic simulation workflow for electromagnetic compliance for IoT, and plastic injection molding simulation for assessing the manufacturability of head lights will be discussed in detail. Lastly, design exploration done with automation as well as the integration between modeling and simulation for democratizing simulation for designers at RESEMIN will be highlighted. The processes and rationale behind geometry preparation, meshing best practices, operating scenario definition (how the scenarios were obtained), solver choices and extraction of KPIs from results post-processing will be covered during this session. The presenters will discuss how RESEMIN overcame challenges related to reducing field failures, expensive regulatory costs, shortening product design timelines, servicing costs, large complex simulation models and hardware limitations.
BiographyManuel Edson Morales is a Mechanical Designer and Structural Analyst at RESEMIN. He graduated as a Mechanical Engineer from the National University of Engineering in Lima, Perú. For seven years Manuel worked with SOLIDWORKS resellers in Peru and Colombia before joining RESEMIN six years ago. Manuel’s expertise stems from working with SOLIDWORKS solutions for the past 17 years and with 3DEXPERIENCE Simulation solutions for the last 4 years. He has been a SOLIDWORKS Elite Applications Engineer since 2013 and is additionally a certified SOLIDWORKS Mechanical Design and Simulation Expert.
Presented By Christine Schwarz (Noesis Solutions)
Authored By Christine Schwarz (Noesis Solutions)Jiajun Gu (fleXstructures Gmbh)
AbstractThe increasing complexity of modern automotive systems is driving the need for advanced solutions to optimize the design and routing of sensor cables, a critical component in ensuring vehicle safety, performance, and efficiency. Traditional manual methods, such as cable design in CAD and validation through physical prototypes are increasingly insufficient for addressing the challenges posed by intricate geometries, constrained design spaces, and stringent performance requirements. To address these challenges, this study introduces an innovative approach that integrates automation and AI-supported optimization algorithms, reducing time and resources, while enabling an efficient and intelligent design cycle. In this case study, the workflow developed to automate cable routing while adhering to constraints such as minimum bending radii, safe distances from components, and load minimization on cables and clips, highlights the integration of automation and optimization tools with complex cable simulation software. By simulating the nonlinear behaviour of cables in the design environment, this approach eliminates the need for physical prototypes, significantly reducing design cycle times and resource consumption. The core of this advancement lies in the application of evolutionary optimization algorithms, enabling exploration of a high-dimensional design space defined by variables such as clip positioning and cable segment lengths. The optimization workflow consistently identified efficient and feasible solutions, satisfying all constraints, and achieving significant improvements over initial manual designs.The results illustrate substantial improvements over traditional methods. Constraints that were previously challenging to meet are addressed, demonstrating the system’s ability to manage complex requirements while ensuring compliance. This integration of automation and AI not only accelerates the design process but also improves reliability by systematically respecting all constraints.This study highlights the potential for automation and optimization to transform the cable design process in the automotive industry. By replacing manual trial-and-error methods with data-driven workflows, engineers can achieve faster development cycles and more efficient, robust designs. This approach represents a shift in how critical components are engineered, contributing to the development of smarter and more efficient vehicle systems.
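To illustrate the kind of evolutionary, constraint-penalised search described above, the following sketch uses SciPy's differential evolution on a toy routing problem. It is not the workflow from the study: the cable response, the clip-position variables and the limits are hypothetical stand-ins for the nonlinear cable simulation.

```python
# Minimal sketch: evolutionary optimization of clip positions under
# penalised bending-radius and clearance constraints (all values fictitious).
import numpy as np
from scipy.optimize import differential_evolution

MIN_BEND_RADIUS = 20.0   # mm, assumed requirement
MIN_CLEARANCE = 5.0      # mm, assumed requirement

def cable_response(clip_positions):
    """Stand-in for the nonlinear cable simulation: returns
    (max cable load, min bending radius, min clearance to components)."""
    x = np.asarray(clip_positions)
    load = np.sum((x - 0.5) ** 2)                     # fictitious load measure
    bend_radius = 15.0 + 30.0 * np.min(np.abs(np.diff(x)))
    clearance = 2.0 + 10.0 * np.min(x)
    return load, bend_radius, clearance

def objective(clip_positions):
    load, bend_radius, clearance = cable_response(clip_positions)
    penalty = 0.0
    if bend_radius < MIN_BEND_RADIUS:
        penalty += 1e3 * (MIN_BEND_RADIUS - bend_radius)
    if clearance < MIN_CLEARANCE:
        penalty += 1e3 * (MIN_CLEARANCE - clearance)
    return load + penalty

# Six normalised clip positions along the harness as design variables.
result = differential_evolution(objective, bounds=[(0.0, 1.0)] * 6, seed=1)
print("best clip layout:", result.x, "objective:", result.fun)
```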
Authored & Presented By Tim Kirchhoff (ihf Ingenieurgesellschaft mbH)
AbstractIn mechanically stressed components, weld seams are usually the weak points critical to failure, especially under alternating loads. The necessary strength verification is carried out according to rules such as the FKM guideline, the IIW recommendations or the Eurocode.The evaluation of weld seams is a challenge, even in modern simulation-driven product development. Due to the special properties of the weld seams (e.g. sharp and quite irregular notches), the stresses for the verification must be determined using one of the concepts developed for this purpose, for example as nominal, structural or notch stress.With the structural stress concept, the seam is represented in a simplified manner in the FE model and the stress for the verification is extrapolated from the stresses on the surface before the weld toe. The Hot-Spot concept from the IIW recommendations is widely used for this purpose and is also referenced in other guidelines.With the notch stress concept, a fictitious notch radius is introduced in the simulation model at the weld toe and in the weld root. The stress for the verification can then be determined directly in the notch radius.The notch stress concept therefore requires a comparatively high modeling effort but is also suitable for the verification of complex weld seam situations that cannot be evaluated using other methods.The areas of application of the different stress concepts are presented and advantages or disadvantages of the individual concepts are discussed.A practical challenge in the verification of weld seams is also the selection of the critical point, especially when the component is subject to multiple alternating loads. This can be alleviated by an automated calculation of all verification points along a weld seam.For this purpose, approaches to automating the stress determination according to the structural or notch stress concept are presented, which were implemented by ihf. It is shown how the stress components depending on the local weld seam direction can be determined according to the requirements of the different guidelines.
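For orientation, the surface-stress extrapolation behind the structural (hot-spot) stress concept mentioned above can be sketched as follows. The two-point rule shown (stresses at 0.4t and 1.0t ahead of the weld toe, weighted 1.67 and -0.67) is the commonly quoted fine-mesh rule from the IIW recommendations for "type a" hot spots; the numerical values are invented for illustration, and the applicable guideline should always be checked before use.

```python
# Minimal sketch of hot-spot stress extrapolation (IIW fine-mesh rule,
# "type a" hot spot); not the automated ihf postprocessing procedure.
def hot_spot_stress(sigma_04t, sigma_10t):
    """Linearly extrapolate surface stresses read out at 0.4t and 1.0t in
    front of the weld toe to the weld toe itself (t = plate thickness)."""
    return 1.67 * sigma_04t - 0.67 * sigma_10t

# Hypothetical FE surface stresses (MPa) ahead of a weld toe:
print(hot_spot_stress(sigma_04t=182.0, sigma_10t=154.0))  # -> ~200.8 MPa
```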
Presented By Dirk Hartmann (Siemens Industry Software)
Authored By Dirk Hartmann (Siemens Industry Software)Alexandru Ciobanas (Siemens Industry Software S.R.L.)
AbstractDigital Twins, tightly integrating the real and the digital world, are a key enabler to support decision making for complex systems. For example, they allow informing operational decisions upfront through accepted virtual predictions of their real-world counterparts’ behavior. Many applications, such as thermal monitoring of batteries, require real-time predictions. Deriving appropriate real-time models is often a complex and tedious task. Therefore, surrogate modeling is a key technology in many real-time Digital Twin applications that require complex predictions. For highly non-linear processes, classical Reduced Order Modeling techniques, such as Krylov methods, are limited and require full access to solver internals. While Machine Learning technologies excel in these cases, their high data demands and lack of explainability limit their practical use. In this presentation, we review an active-learning-based Operator Inference approach to surrogate modeling. This method is built on Proper Orthogonal Decomposition combined with regression techniques. It explicitly leverages the structure of the underlying systems, making it fully explainable. Active learning ensures minimal data requirements, making the approach scalable for industrial applications. We will present the basic concept of this technology and demonstrate its application in a real-world battery thermal management case. The proof of concept was realized within a novel Surrogate Modeling Sandbox Framework for STAR-CCM+. We also quantify the storage and compute requirements for realistic applications with millions of degrees of freedom, which are significantly smaller compared to classical Deep Learning methods. This underlines the practicability of the approach for real-world applications. The corresponding real-time models are simple explicit Ordinary Differential Equations, which can be exported via the Functional Mock-up Interface standard as well as implemented directly in system simulation and control tools. At the same time, Proper Orthogonal Decomposition allows the full 3D fields to be retrieved at any point in time by means of simple matrix multiplications.References:[1] Hartmann (2021): Real-time Digital Twins – https://doi.org/10.5281/zenodo.5470479 [2] Zhuang, Lorenzi, Bungartz, Hartmann (2021): Model Order Reduction based on Runge-Kutta Neural Network – https://doi.org/10.1017/dce.2021.15 [3] Zhuang, Hartmann, Bungartz, Lorenzi (2023): Active-learning-based nonintrusive model order reduction – https://doi.org/10.1017/dce.2022.39 [4] Uy, Hartmann, Peherstorfer (2023): Operator inference with roll outs for learning reduced models from scarce and low-quality data – https://doi.org/10.1016/j.camwa.2023.06.012
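A minimal, non-intrusive sketch of the underlying idea (a POD basis built from snapshots, then a reduced operator fitted by regression) is given below. It omits the active-learning loop and the STAR-CCM+ coupling described in the abstract, and the snapshot data are synthetic.

```python
# Sketch of POD + operator inference on synthetic snapshot data.
import numpy as np

# Synthetic snapshot matrix: n spatial DOFs x m time steps of a stable
# linear system (stand-in for full 3D thermal fields).
rng = np.random.default_rng(0)
n, m, dt = 200, 400, 1e-2
A_true = -np.diag(np.linspace(0.5, 3.0, n))          # stable dynamics
X = np.empty((n, m))
X[:, 0] = rng.standard_normal(n)
for k in range(m - 1):
    X[:, k + 1] = X[:, k] + dt * (A_true @ X[:, k])  # explicit Euler

# 1) POD basis from an SVD of the snapshots.
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 10
V = U[:, :r]                                         # reduced basis

# 2) Project snapshots and approximate time derivatives.
Xr = V.T @ X
dXr = (Xr[:, 1:] - Xr[:, :-1]) / dt

# 3) Operator inference: least-squares fit of dxr/dt ~ A_r xr.
A_r, *_ = np.linalg.lstsq(Xr[:, :-1].T, dXr.T, rcond=None)
A_r = A_r.T

# The reduced model is an explicit ODE dxr/dt = A_r xr; full 3D fields are
# recovered at any time by the matrix product V @ xr.
print("reduced operator shape:", A_r.shape)
```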
Authored & Presented By Dami Bok (Hyundai Motor Company)
AbstractIn the case of NVH performance evaluation, there have been limitations in applying virtual vehicle development so far. The first reason is the technical issue of implementing 3D visualization of noise in a virtual environment. The second reason is the difference between the noise perceived by drivers in a vehicle and the reproduced noise in a virtual environment. As a solution to these issues, recent developments in spatial audio technology and 3D sound color mapping have received attention. By utilizing spatial audio technology, it is possible to provide the same experience to the driver even without physical driving. With the use of visualization technology, it becomes easier to recognize major noise sources and noise paths. In this paper, Ambisonics was used as the 3D sound format for implementing spatial audio and color mapping. Ambisonics is a technology that converts spatial audio information into a three-dimensional sound field composed of directional characteristics based on Spherical Harmonics. It is used for recording or reproducing immersive sound. As the order of the spherical harmonics increases, the number of required listening points increases, but it also allows for more sophisticated sound representation. For recording Ambisonics beyond the 3rd order, it is common to use microphone arrays in a spherical arrangement to capture 3D sound. This allows for the measurement of higher-order Ambisonics. In the case of the Eigenmike-32, where 32 microphone capsules are distributed on the surface of a microphone array with a 42mm radius, the effective frequency range for applying third-order Ambisonics is limited to 700Hz to 8kHz, which is insufficient to cover the primary frequency range of interest (20Hz to 1kHz) in the development of NVH performance for complete vehicles. In this study, we conducted NVH virtual performance evaluation using a new concept helmet microphone array that is suitable for measuring noise in the frequency range of 20Hz to 1kHz, which is the range of interest for vehicle driving noise. The helmet microphone array has the advantage of being able to capture the sound at the driver's seat most realistically during driving, and its radius of 95mm is about 2.3 times that of the commonly used Eigenmike-32, making it suitable for measuring low-frequency noise below 1kHz. In this study, we have developed a process to generate spatial audio in Ambisonics and Sound Particle Separation (SPS) format based on sound captured using a helmet microphone array. This process also includes post-processing techniques to make the spatial audio audible and visualizable. The raw data used in this study were acquired through measurements or simulations that included a helmet microphone array model, providing the extracted impulse responses. We then applied auditory filters to the extracted data and synthesized them to transform them into spatial audio format. Through this process, we were able to implement auditory rendering and visualization techniques.
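As a small illustration of Ambisonics encoding, the sketch below encodes a plane-wave source into the four first-order B-format channels using the classic spherical-harmonic weights. The higher-order encoding and the helmet-array processing used in the study are not reproduced here, and the signal is a hypothetical test tone.

```python
# Toy first-order Ambisonics (B-format) encoding of a plane-wave source.
import numpy as np

def encode_first_order(signal, azimuth_deg, elevation_deg):
    """Encode a mono signal arriving from (azimuth, elevation) into the
    four first-order Ambisonics channels W, X, Y, Z."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = signal / np.sqrt(2.0)             # omnidirectional component
    x = signal * np.cos(az) * np.cos(el)  # front/back
    y = signal * np.sin(az) * np.cos(el)  # left/right
    z = signal * np.sin(el)               # up/down
    return np.stack([w, x, y, z])

# Hypothetical 1 kHz tone arriving 30 degrees to the left, at ear level.
fs, f = 48000, 1000.0
t = np.arange(0, 0.01, 1.0 / fs)
bformat = encode_first_order(np.sin(2 * np.pi * f * t),
                             azimuth_deg=30.0, elevation_deg=0.0)
print(bformat.shape)  # (4, number of samples)
```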
BiographyEducation: Bachelor's degree in Mechanical Engineering, Sungkyunkwan University Experience: NVH CAE at Hyundai Motor Company, specializing in acoustic simulation and spatial sound Research Interests: Acoustic simulation, Spatial sound, Virtual test environment Presentation Topic: Development of Road Noise Spatial Sound and Sound Map Implementation Technology Using a New Concept Helmet Microphone Array
Presented By George Scarlat (University of Maine)
Authored By George Scarlat (University of Maine)Britt Helten (Advanced Structures & amp Composites Center) Felipe Robles-Poblete (Advanced Structures & amp Composites Center)
AbstractThe paper presents an initial investigation into using fast-running surrogate models to approximate the numerical simulation of a Material Extrusion (MEX) additive manufacturing process. This study’s goal was to explore several off-the-shelf surrogate model solutions, within either a commercial code or open source, that could mimic a long-running MEX process simulation of thermoplastic parts, with a reasonable accuracy at a fraction of the required time.The Additive Manufacturing (AM) process was simulated as a sequentially coupled thermo-mechanical analysis within a Finite Element (FE) environment. Several parts were considered in this study, ranging from simple benchmark shapes (purposely designed to showcase specific deformation behavior after cooling) to a more complex geometry. The underlying assumption here is that the part displacement field during the whole AM process is contiguous, meaning no cracks or other discontinuities would occur throughout. Temperature-dependent material properties were considered where appropriate for the neat Polyethylene Terephthalate Glycol (PETG) material used in this study.Several process and material parameters (e.g. convection coefficient, deposition temperature, thermal conductivity, coefficient of thermal expansion, elastic modulus) were considered as input variables and a Latin Hypercube Sampling (LHS) of the design space was selected for generating training data for the surrogate models. The output variables of interest were part deformations and stresses at various locations, at the end of the cooldown phase. Several popular techniques (Response Surface Models, Radial Basis Functions, Kriging and Artificial Neural Networks) were used for building the surrogate models, and statistical tools were used to compare their results with the corresponding FEA models.Although experimental validation is not a focus of the current study, some of the shapes analyzed here were physically manufactured to provide basic confidence in the resultant deformations, and this aspect is presented in the paper as well. The paper ends with conclusions, recommendations and ideas for expanding such surrogate models further, to mimic the MEX process simulation of large-scale fiber-filled thermoplastic parts.
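A hedged sketch of one such off-the-shelf route (Latin Hypercube Sampling of the process parameters plus a Kriging, i.e. Gaussian-process, surrogate) is shown below. The long-running FE process simulation is replaced by a stand-in function, and the parameter ranges are illustrative only.

```python
# LHS sampling + Kriging surrogate for a stand-in MEX process response.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Design variables: convection coefficient, deposition temperature, CTE.
bounds_lo = np.array([5.0, 210.0, 5e-5])
bounds_hi = np.array([25.0, 250.0, 9e-5])

sampler = qmc.LatinHypercube(d=3, seed=0)
X = qmc.scale(sampler.random(n=40), bounds_lo, bounds_hi)

def fe_process_simulation(x):
    """Placeholder for the long-running thermo-mechanical MEX simulation:
    returns a single deformation measure at the end of cooldown."""
    h, T_dep, cte = x
    return 1e3 * cte * (T_dep - 25.0) / np.sqrt(h)

y = np.array([fe_process_simulation(x) for x in X])

surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X, y)

# The trained surrogate now answers "what if" queries in milliseconds.
x_query = np.array([[12.0, 235.0, 7e-5]])
mean, std = surrogate.predict(x_query, return_std=True)
print(f"predicted deformation: {mean[0]:.3f} +/- {std[0]:.3f}")
```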
BiographyMr. George Scarlat is a Sr. Research Engineer within the Advanced Structures and Composites Center at the University of Maine, USA. His current research interests are focused on performance prediction through process simulation and design space exploration of large-scale additively manufactured composite structures. Mr. Scarlat holds a MSc degree in Mechanical Engineering from Clemson University, SC, USA and has previously worked in industry for the past two decades in the areas of design, simulation and optimization of various manufacturing processes, composite structures, and multi-body dynamic systems. He held prior positions as Sr. Research Engineer with Albany Engineered Composites, a manufacturing company producing 3D-woven composite parts for the aerospace and defense industries, and as Engineering Specialist in the Crashworthiness and Occupant Safety group at Dassault Systemes SIMULIA Corp.
Presented By Johan Vanhuyse (Siemens Industry Software)
Authored By Johan Vanhuyse (Siemens Industry Software)Clement Bertheaume (Siemens Industry Software NV) Mike Nicolai (Siemens Industry Software NV)
AbstractThe interest in fuel cell vehicles has recently been increasing as they offer an alternative to battery electric vehicles for reducing emissions, in particular in cases where the boundary conditions for battery electric vehicles (availability and capacity for charging) are not met. Fuel cell vehicles rely on a complex energy management system to efficiently utilize the power generated by the fuel cell stack, manage the battery state-of-charge, and optimize overall vehicle performance. Designing an optimal energy management strategy for a fuel cell vehicle is however a challenging task due to the highly nonlinear and coupled dynamics involved, as the efficiency of a fuel cell depends not only on the amount of power which is requested, but also on the consistency of this demand. Reinforcement learning is a powerful machine learning technique that has shown great promise in solving complex control and decision-making problems. Unlike traditional control approaches that rely on detailed mathematical models, reinforcement learning allows an agent to learn an optimal control policy by interacting with the system and receiving feedback in the form of rewards or penalties. This paper investigates the application of reinforcement learning to determine an efficient and robust energy management strategy for a fuel cell vehicle. The goal is to maximize the vehicle's overall efficiency and driving range while maintaining the battery state-of-charge within desired limits. A reinforcement learning agent is trained to control the power delivered by the fuel cell to the battery, based on the current vehicle operating conditions such as power demand, battery state-of-charge, and fuel cell efficiency. By comparing the reinforcement learning approach against a traditional rule-based control strategy, this paper demonstrates that the RL agent is able to decrease the hydrogen consumption by 4%, while the process to develop a reinforcement learning-based controller also requires much less manual effort. The results showcase the potential of reinforcement learning to enable adaptive and robust energy management for fuel cell vehicles. The reinforcement learning agent is able to learn a control policy that adapts to changing driving conditions and efficiently coordinates the fuel cell and battery subsystems. This leads to significant improvements in overall vehicle efficiency and driving range compared to the rule-based approach. Furthermore, the nature of reinforcement learning allows the energy management strategy to be quickly updated and deployed on different fuel cell vehicle platforms, reducing the engineering effort required.
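The sketch below conveys the basic reinforcement-learning setup in a deliberately simplified form: a tabular Q-learning agent chooses the fuel-cell power level so that hydrogen use stays low while the battery state-of-charge stays in bounds. It is not the agent, vehicle model or RL library used in the study; all numbers are illustrative.

```python
# Toy tabular Q-learning for fuel-cell / battery power split.
import numpy as np

rng = np.random.default_rng(0)
soc_bins = np.linspace(0.2, 0.9, 8)            # discretised battery SOC
fc_powers = np.array([0.0, 10.0, 20.0, 30.0])  # kW fuel-cell setpoints
Q = np.zeros((len(soc_bins), len(fc_powers)))

def h2_rate(p_fc):
    """Toy hydrogen consumption curve: efficiency drops at high power."""
    return 0.02 * p_fc + 0.0005 * p_fc ** 2

alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(2000):
    soc = 0.6
    for step in range(200):
        s = int(np.clip(np.digitize(soc, soc_bins) - 1, 0, len(soc_bins) - 1))
        a = rng.integers(len(fc_powers)) if rng.random() < eps else int(Q[s].argmax())
        demand = 15.0 + 10.0 * np.sin(0.05 * step)   # kW drive-cycle demand
        soc += (fc_powers[a] - demand) * 1e-3        # crude battery model
        # Reward: little hydrogen used, SOC kept near its target band.
        reward = -h2_rate(fc_powers[a]) - 5.0 * max(0.0, abs(soc - 0.6) - 0.2)
        s2 = int(np.clip(np.digitize(soc, soc_bins) - 1, 0, len(soc_bins) - 1))
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])

print("learned fuel-cell setpoint per SOC bin:", fc_powers[Q.argmax(axis=1)])
```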
BiographyDr. Johan Vanhuyse obtained a Master’s degree and PhD in mechanical engineering at KU Leuven (Belgium). He currently works as a Product Manager for Simcenter Studio at Siemens in Belgium. In this role he focuses on AI-based Generative Engineering and reinforcement learning.
Presented By Marianthi Dimoliani (BETA CAE Systems)
Authored By Marianthi Dimoliani (BETA CAE Systems)Antonis Perifanis (BETA CAE Systems SA)
AbstractIn the rapidly evolving landscape of computer-aided engineering (CAE), the efficient management and accessibility of simulation and test data remains a persistent challenge. Traditional approaches often require stakeholders to navigate multiple specialized software platforms, manage complex installation procedures, and contend with static reports that fail to capture the full complexity of engineering analyses.This paper presents an innovative web-based framework where data is shared and visualized across engineering teams in interactive html dashboards. The processes presented facilitate data sharing in two directions: one for sharing simulation results with design teams and physical testing departments, and another for making physical test data accessible to CAE engineers for correlation studies. For simulation data dissemination, results are post-processed with tools tailored to various analysis types, such as evaluation of Occupant Injury Criteria, Pedestrian load cases, NVH load cases or CFD cases. These tools automatically extract and store key report items—including curves, values, images, videos, and specific 3D model results—in a structured filesystem format. The data organization is handled internally by the tools, requiring no additional user intervention. The analyst can then utilize available pre-built dashboard templates, populating them with stored data of the simulations. These dashboards can be easily tailored to different user roles (managers, analysts, etc) providing each time the required level of detail and can be shared with colleagues, designers, or test engineers by simply sharing a link, allowing recipients to view them in their browser. In the opposite direction, the framework also addresses the challenge of test data integration by implementing direct connectivity to various data sources, including SQL servers, ASAM-ODS data servers, and conventional file formats (MS-Access, XLSX, CSV). This capability facilitates real-time correlation studies between simulation and test results, establishing a unified platform for cross-functional communication.The visualization of the data via real-time updated html pages moves teams beyond static reports into a new era of interactive data exploration and effortless sharing. Apart from data browsers and interactive plot charts it is even possible to access and view 3d models with their results giving far more valuable insights.As the volume and complexity of engineering data continue to grow, this solution provides a scalable platform for transforming raw data into actionable engineering insights, ensuring that valuable engineering data reaches everyone, ultimately accelerating the product development cycle.
Presented By Michael Schlenkrich (Hexagon Manufacturing Intelligence)
Authored By Michael Schlenkrich (Hexagon Manufacturing Intelligence)Prafulla Kulkarni (Hexagon Manufacturing Intelligence)
AbstractIn a world marked by rapid technological evolution and a constant influx of information, iterative simulation has emerged as a crucial tool for organizations. It enables them to test, refine, and optimize processes and products in a controlled, repeatable manner. However, this approach presents three key challenges: access to resources, process optimization, and data leverage. Addressing these challenges effectively can significantly enhance the utility and impact of simulations. Firstly, ensuring ample hardware and software resources is fundamental to running flexible, on-demand simulations. Cloud computing offers a scalable solution, allowing organizations to access the necessary computational power without significant upfront investment. By utilizing cloud-based platforms, simulations can be dynamically scaled according to specific needs, ensuring efficiency and cost-effectiveness. This flexibility allows organizations to run complex simulations on demand, adapting quickly to changing requirements and optimizing resource usage. Secondly, streamlining iterative processes is crucial for enhancing simulation speed and efficiency. Automation plays a pivotal role by reducing manual effort and minimizing error. Automated pipelines can handle repetitive tasks, ensuring consistent and efficient simulation execution. Additionally, adopting agile methodologies enables teams to quickly respond to new information and changing requirements, providing a significant competitive advantage. Finally, the abundant data generated from simulations must be effectively harnessed for informed decision-making. Implementing AI and machine learning (ML) solutions is vital in this endeavor. These technologies can sift through large datasets, identifying patterns, trends, and insights not immediately apparent. AI/ML can also optimize simulation models, improving their accuracy and predictive power over time. To tackle these challenges, we propose an innovative approach centered around leveraging cloud computing and utilizing AI/ML solutions for data training combined with simulation management. A key component of this approach is the concept of Data Gravity, where applications move to the data rather than transferring data to applications. This strategy facilitates a platform-level solution where cloud resources, automation, and advanced analytics work together seamlessly at a centralized location. The synergy of cloud computing, AI/ML, and Data Gravity ensures that simulations are not only more efficient but also more insightful, paving the way for data-driven decision-making and strategic growth.
Presented By Jayant Pawar (Dassault Systèmes)
Authored By Jayant Pawar (Dassault Systèmes)Girish Patil (Dassault Systems) Rajkumar Natikar (Dassault Systems)
AbstractSustainability is a critical challenge for industries today, primarily due to the extensive use of plastic in component design. Companies are actively seeking ways to minimize plastic consumption by optimizing component thickness through Finite Element Analysis (FEA). The fuel tank is an important vehicle component that is subjected to different loading conditions. The key challenge lies in determining the ideal thickness distribution that can withstand the desired loading conditions while achieving this optimization in the shortest time possible. Many design iterations may be performed during the early concept design stage, which increases the time to market of a product. Extrusion blow molding (EBM) is a highly non-linear and hard-to-predict simulation that requires Finite Element Analysis expertise, and all subsequent simulations depend on the thickness distribution obtained at the end of the extrusion blow molding simulation. Recent advancements in data-driven technologies, particularly machine learning (ML), offer a promising solution to expedite the design exploration process while conserving computational resources. To address the challenge of exploring a wider design space in a short span of time, Dassault Systèmes has developed a novel methodology that employs a parametric machine learning (ML) physics model to efficiently predict the thickness distribution after blow molding and the stress contours of subsequent structural load cases. By leveraging a neural network-based model, the richness of 3D simulation results is maintained while significantly reducing execution time, facilitating quasi-interactive design exploration and optimization. The ML physics model is embedded in an optimization loop to identify the optimum thickness distribution of a fuel tank that maintains sufficient thinning in EBM while meeting the safety standards for structural loading. Machine learning can help predict Finite Element Analysis validation results faster in the early concept design stage and thereby reduce the overall product development cycle. This paper highlights the potential of combining advanced ML techniques with traditional simulation methods to achieve design optimization quickly and efficiently.
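The general idea of a parametric ML physics model can be sketched as follows, with a small multilayer perceptron mapping assumed process parameters to a nodal thickness field. This is not Dassault Systèmes' model; the parameters and training data are synthetic placeholders for simulation results.

```python
# Sketch: neural-network surrogate for a blow-moulded thickness field.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_runs, n_nodes = 60, 500

# Inputs: parison thickness scale, blow pressure, melt temperature (assumed).
X = rng.uniform([1.0, 0.4, 200.0], [3.0, 0.8, 240.0], size=(n_runs, 3))

# Synthetic "simulation results": wall thickness at each shell node.
base = 0.5 + 0.5 * np.abs(np.sin(np.linspace(0, 3 * np.pi, n_nodes)))
y = X[:, [0]] * base - 0.1 * X[:, [1]] + 0.001 * (X[:, [2]] - 220.0)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(X, y)

# Quasi-interactive design exploration: predict a full thickness field in
# milliseconds for an unseen parameter set and check the minimum thickness.
t_field = model.predict([[2.2, 0.6, 225.0]])[0]
print("predicted minimum wall thickness [mm]:", t_field.min().round(3))
```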
Presented By Wolfgang Krach (CAE Simulation & Solutions Maschinenbau Ingenieurdienstleistungen)
Authored By Wolfgang Krach (CAE Simulation & Solutions Maschinenbau Ingenieurdienstleistungen)Walter Vonach (CAE Simulation & amp amp Solutions)
AbstractThe load-carrying structure of modern railway coaches is designed using aluminium profiles in the case of passenger coaches or mainly steel plates in the case of freight wagons. Both structures are fabricated by welding the parts together, leading to a large number of welds which have to withstand the imposed loads. This paper focuses on a code-conforming fatigue assessment of such structures. Although most of the welds used are full-penetration seam welds, fillet welds have to be used in some areas and would be beneficial for cost savings. Accordingly, the use of fillet welds, and especially one-sided fillet welds, and their assessment is of large economic interest. A commonly used practical rule for fillet welds takes the weld throat thickness a as 0.7 times the sheet thickness t; accordingly, the stresses in the weld are higher than in the sheet. An efficient FE modelling scenario, however, does not model each weld with its corresponding thickness in the design phase. In addition, a fillet weld is subjected to a local bending load when a global tension load is applied to the attached sheet of a T-joint connection. This bending moment, caused by the eccentricity of the weld relative to the joint, induces additional stresses in the weld which have to be taken into account for a fatigue prediction. Therefore, correct weld stresses in fillet welds have to be calculated in a postprocessing step taking into account the reduced cross-section and the additional bending. The realistic effect of the additional bending of the weld depends on the actual geometric circumstances: the effect will be low when welding a tube with a small diameter onto a flat plate, since local bending is suppressed by the curvature, whereas the bending will be significant when a flat plate is welded onto a flat sheet or in the case of a large rectangular tube on a flat plate. The influence of the local bending will be shown for these examples. A modelling and postprocessing procedure that automatically generates the welds (from parts or part properties) and assesses them according to Eurocode and/or DVS 1612 and DVS 1608, respectively, will be presented for an aluminium passenger coach and a steel freight wagon.
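The effect of the reduced throat section and of the eccentricity-induced bending can be illustrated with the simplified worked example below. The formulas are textbook engineering estimates, not the ihf postprocessing procedure, and, as noted above, the share of the bending that actually acts depends strongly on the restraint of the joint.

```python
# Simplified estimate for a one-sided fillet weld (throat a = 0.7 t) under a
# membrane force in the attached sheet of a T-joint. Illustrative only.
t = 10.0          # sheet thickness [mm]
a = 0.7 * t       # weld throat [mm]
F = 100.0e3       # tensile force on a 100 mm wide strip [N]
width = 100.0     # strip width [mm]

sigma_sheet = F / (t * width)                 # nominal stress in the sheet [MPa]
sigma_throat = F / (a * width)                # membrane stress in the throat
e = t / 2.0                                   # assumed force eccentricity [mm]
M = F * e                                     # bending moment on the weld [N*mm]
sigma_bending = 6.0 * M / (width * a ** 2)    # nominal bending stress in the throat

# In practice only a fraction of this nominal bending acts, depending on how
# strongly the surrounding structure suppresses the local rotation.
print(f"sheet stress:            {sigma_sheet:6.1f} MPa")
print(f"weld membrane stress:    {sigma_throat:6.1f} MPa")
print(f"additional bending term: {sigma_bending:6.1f} MPa (upper-bound estimate)")
```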
Presented By Michelle Quan (Autodesk)
Authored By Michelle Quan (Autodesk)Jenmy Zhang (Autodesk)
AbstractIn the rapidly evolving landscape of engineering certification, traditional methodologies are becoming increasingly inadequate as industries advance towards more innovative and high-performance designs. Existing certification workflows in fields such as automotive and aerospace rely heavily on factors-of-safety estimations and practices derived from historical data and physical testing, which can be costly and time-consuming [1]. Derivative methods, outlined in standards like NASA-STD-5001B [2], depend on archives of legacy data, which may not adequately address novel materials and designs emerging from advancements in structures and manufacturing techniques. Moreover, while test-based certification in standards like ASTM D3039/D3039M-17 [3] provides more accurate predictions of material behaviour through prototype testing, it is limited to specific environmental conditions and becomes inadequate for complex system interactions and greater variability in operating environments [4]. These methods prescribe conservative approaches, including factors of safety and prototype testing, often resulting in unnecessarily heavy structures that do not optimize performance or guarantee safety [4]. Digital twin technology offers a powerful solution to traditional certification challenges by creating virtual representations of physical assets that adapt using real-time data. Our study introduces a digital twin model developed by Autodesk Research that uses strain gauge signals to generate dynamic stress outputs across an entire system, even with unknown stimuli. By integrating sensors into a composite load-bearing structural member of an Unmanned Aerial Vehicle (UAV), a test rig is created that allows thorough testing under controlled in-lab conditions. This approach allows for real-time stress field visualization through principled physics-based data extrapolation from strategically placed sensors. Our study consists of three components. The first component employs Autodesk Nastran as a Finite Element Analysis (FEA) solver, allowing for the development of a fully functional digital twin without the need for an in-house FEA solver. The second component involves a three-way validation between Nastran, the digital twin algorithm, and physical sensors. By applying controlled loads to the test rig, we can replicate the conditions in both Nastran and the digital twin, then compare the resulting stress fields against physical sensor data collected from the test rig. In the third component, the comparison between Nastran, the digital twin, and sensor data is repeated without assuming knowledge of the load. Instead, the digital twin algorithm is augmented with the ability to infer the operating conditions from in-situ sensor data. This approach helps close the correlation gap between sensor data, simulation, and the digital twin, informing critical design decisions by providing insights into the accuracy of assumptions and predictions. Our goal is to integrate the structural member from the test rig into the UAV system using modal reduction techniques. This allows for visualization and simulation of the structural member using a robust, physics-informed digital twin accurately replicating in-flight conditions. By repeating this process for all components of the UAV, we aim to create a fully developed sensorized digital twin, enabling collection of real-time dynamic data and assessment of structural responses during operation.
Unlike conventional testing, which often relies on assumed maximum loads with uncertain accuracy, this method reduces uncertainty by delivering estimated loads and accurate structural responses. This clarity enhances the understanding of safety margins and addresses current challenges in design and certification. The monitoring capabilities of this digital twin help streamline the certification process, potentially eliminating traditional cyclic testing and iterative evaluations. As confidence in system behavior grows, this approach opens innovative design possibilities for composites and other materials, moving beyond reliance on archived data and physical testing while improving simulation accuracy. This methodology establishes operation-specific conditions tailored to unique use cases, paving the way for advanced designs in highly sensitive applications.
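A minimal sketch of the load-identification step, under the assumption of linear superposition of precomputed unit-load responses, is given below. The matrices are synthetic stand-ins for FE (e.g. Nastran) output and gauge readings; the actual digital twin algorithm is not reproduced.

```python
# Sketch: infer operating loads from a few strain gauges by least squares,
# then reconstruct the full stress field from precomputed unit-load results.
import numpy as np

rng = np.random.default_rng(0)
n_gauges, n_loadcases, n_elements = 8, 3, 5000

# Unit-load responses: strain at each gauge and stress in every element
# for each candidate load case (from up-front FE runs).
S_gauge = rng.standard_normal((n_gauges, n_loadcases))
S_field = rng.standard_normal((n_elements, n_loadcases))

# "Measured" gauge strains for an unknown load combination (plus noise).
true_loads = np.array([1.5, -0.4, 0.8])
eps_measured = S_gauge @ true_loads + 0.01 * rng.standard_normal(n_gauges)

# Infer the operating loads, then reconstruct the full stress field.
loads_hat, *_ = np.linalg.lstsq(S_gauge, eps_measured, rcond=None)
stress_field = S_field @ loads_hat
print("recovered load factors:", np.round(loads_hat, 3))
```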
BiographyMichelle is an Associate Research & Design Engineer at the Toronto Technology Centre, part of Autodesk Research. She specializes in design, prototyping, and advanced manufacturing, with a focus on FEA simulation. With a background in mechanical engineering, mechatronics, and aerospace design, Michelle contributes to digital twin research by leveraging her expertise in FEA simulations, FEM, generative part design and optimization, and design for manufacturing.
Authored By Svetlana Jeronimo (Dassault Systèmes Deutschland GmbH)Faron Hesse (Dassault Systemes) Florian Mayot (MAHLE)
AbstractAutomotive fans for engine bay cooling in vehicles powered by internal combustion engines (ICE) or by battery-electric drives are a critical component of vehicle performance. Aside from their cooling capability, which is assessed by the mass flow throughput they achieve, their acoustic behavior is also of paramount concern, because passenger vehicle noise perception is a measure of vehicle comfort. Fan acoustic behavior is even more critical for battery-powered vehicles, where the fan’s acoustics is no longer masked by the loud noise of an internal combustion engine arising from the piston motion and the associated combustion process. Dassault Systèmes’ lattice Boltzmann method (LBM) software is able to perform direct noise computations and, thereby, assist in developing low-noise fans using parametric design-of-experiment (DOE) studies that identify the fan geometry with the best performance and acoustic characteristics. The question therefore arises: can machine learning be coupled to these physics-based LBM acoustics simulations to accelerate the DOE study for low noise? The envisioned process is to train simple feed-forward neural networks (also known as multilayer perceptron networks) with high-fidelity computational fluid dynamics LBM acoustic run data, where the inputs to the neural network are the fan geometry, while the outputs are the fan surface pressure, the volumetric velocity field and the sound pressure level (SPL) at a designated microphone location. Subsequently, once the machine learning models are trained, unseen fan geometries are fed into the multilayer perceptron models to predict the fan surface pressure, volumetric velocity field and sound pressure levels (SPLs) at a designated microphone location at a fraction of the computational cost. This approach is in line with the growing trend of combining traditional computer-aided engineering (CAE) software with machine learning focused processes, allowing simulation and design engineers to explore more design options in a DOE without incurring excessive computational expenses, fostering innovation and efficiency in fan design processes.
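As a pointer to how the last step might look, the sketch below trains a small feed-forward (multilayer perceptron) network to map a few assumed fan geometry parameters to the SPL at a microphone position. It is not the production workflow described above; the geometry parameters and training data stand in for LBM run results.

```python
# Sketch: MLP surrogate mapping fan geometry parameters to SPL.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_doe = 50
# Assumed geometry parameters: blade count, sweep angle [deg], tip clearance [mm]
# (blade count treated as continuous for simplicity).
X = rng.uniform([5, 20.0, 4.0], [11, 45.0, 12.0], size=(n_doe, 3))
# Synthetic SPL [dB(A)] at the microphone, standing in for LBM results.
y = 62.0 + 0.8 * X[:, 0] - 0.15 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(0, 0.3, n_doe)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=5000, random_state=0))
model.fit(X, y)

# Screen an unseen candidate geometry at negligible computational cost.
print("predicted SPL [dB(A)]:", model.predict([[7, 35.0, 6.0]]).round(1))
```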
Authored & Presented By Michael Roy (TWI)
AbstractWhen finite element analysis (FEA) is applied to the process of powder bed fusion (PBF), there are numerous numerical options and choices that have to be made to obtain a satisfactory and effective FEA model that can produce the required results. The reported research here looks at the numerical considerations and choices and compares the numerical models and outcomes for each choice. It is important to consider the intended purpose for doing the numerical modelling work right from the onset and to embark on the numerical endeavour accordingly. The fidelity, complexity, size and speed of the numerical model should all be considered and planned carefully.In the reported work, the purpose behind the numerical modelling is presented. This is followed by looking at numerical considerations including the level and location of mesh refinement, numerical control parameters and how they can increase numerical efficiency, time increment sizes and how they correspond to process parameters and mesh refinement. Another worthwhile consideration, which can significantly impact the size of the output file, is to appropriately specify the required set of output parameters and the location where they are needed.A thermal FEA model is generated and meshed for the intended purpose. It is a small cuboid made of an aluminium alloy that has the top surface covered by a thin layer of the same metal in the powder state to allow the deposition in a single pass produced by a narrow laser beam tool path.Two meshing strategies are investigated, varying the level and location of mesh refinement. Comparisons between the two meshing strategies allow conclusions to be made about the most appropriate way to mesh such FEA models. The time increment may also have a direct impact on the accuracy of the temperature history and the numerical convergence of the model. A range of time increments is investigated for that reason. Furthermore, control parameters can be critical for the accuracy of the numerical results. Making them unnecessarily strict can make the model too slow to run without any improvement to the results. The control parameters are investigated for that reason, and recommendations are made that can save substantial run times without compromising the accuracy. Final conclusions are presented on how to improve the efficiency of this type of FEA.
Presented By Erik Glatt (Math2Market)
Authored By Erik Glatt (Math2Market)Maximilian Luczak (Math2Market GmbH)
AbstractAll-solid-state batteries (ASSBs) are considered the most promising technology to significantly enhance the range of electric vehicles. ASSBs are strong candidates for the next generation of batteries for automotive and aviation applications due to their increased energy density and improved safety achieved by eliminating flammable liquid electrolytes. The 3D morphology of the electrode microstructure in ASSBs has greater impact on their performance than in conventional lithium-ion electrodes. The DELFIN collaborative research project presented here seeks to develop an experimentally validated simulation model for the digital material design of ASSBs embedded into existing software packages to enable research of structural designs in ASSB technology. To accomplish this, ASSB electrodes with varied structural compositions are initially synthesized and characterized electrochemically by Justus Liebig University Giessen. Next, RJL Micro & Analytic employs advanced multichannel µCT imaging techniques to capture detailed tomographic data of their microstructures. This 3D imaging data serves as a foundation for calibrating stochastic structure models developed by Ulm University, enabling the generation of additional virtual ASSB microstructures with varied structural characteristics. Numerical performance simulations on these virtual structures establish key microstructure-property relationships and provide guidelines for the optimized design of the electrodes. Furthermore, the integration of electrochemical studies with 3D µCT data aids the creation of a statistical digital twin, which is used to validate the simulation model. A parametric structure generator for ASSB cathodes is also under development for streamlining ASSB research and establishing a robust workflow for continuous ASSB innovation and design. In this work, we focus on the image processing and image segmentation, the creation of the digital twin, and the simulation of charging and discharging of ASSB based on the detailed 3D microstructure. This approach significantly reduces the need for costly and time-consuming experimental trial-and-error methods in the design of materials. As a result, the industrial development of high-performance ASSBs can be accelerated and made more economically competitive. The developed simulation model serves as a tool for original equipment manufacturers (OEMs), battery producers, material suppliers, and academic researchers to better predict and evaluate future battery trends.
AbstractThe importance of the availability and consistency of data in the product development process has increased in recent years. Systems for the management of simulation processes and data (SPDM) are therefore widely used today. The implementation of SPDM systems continues to present companies with major challenges, as they usually have to be implemented in an existing system landscape and affect processes, organisation, systems and required interfaces. Analysing the existing environment and recording specific requirements is therefore essential. Problems that are not solved in the conceptualisation phase will arise during implementation at the latest and can delay the overall project. As part of a customer project with a leading global manufacturer of fuel cells, GNS Systems was able to support the evaluation and implementation of an SPDM system for the company-wide management of cross-disciplinary CAE data. During the conceptualisation phase, the experts recorded the requirements for an SPDM and its use in the business units in topic-specific workshops and a market analysis. Functional requirements were derived from these overarching needs, forming the criteria for evaluating the SPDM system. The primary goal of the SPDM system was to reduce the engineers’ workload, enabling them to dedicate more time to research and development. Additionally, the SPDM needed to integrate seamlessly into the company’s IT infrastructure. To achieve this, the most critical requirement was a cloud-native solution, along with authentication implemented via AAD. Another key requirement was the implementation of CFD and system simulation workflows, including the data model and its associated CAE tools, as well as corresponding pre- and post-tools. Beyond these tools, a flexible interface to other data management systems – such as a requirement management system, an ERP system, and a code repository – was essential. Along with the system integration, traceability was identified as a vital requirement. The SPDM system needed to ensure both internal traceability and traceability across systems, while also guaranteeing reproducibility. Finally, functionalities such as data search and acquisition, enrichment of data with metadata, and comparison capabilities for inputs and results were also crucial. These requirements were prioritized and weighted in close collaboration with the engineers. The subsequent market analysis revealed seven relevant providers for effective data management for the entire CAE area. After evaluation based on the criteria from requirements management, the right SPDM solution was identified. The vendor-neutral architecture of the system offers users maximum compatibility and interoperability with other data management systems and third-party CAE tools. Other important factors in selecting the SPDM solution were its full configurability and the possibility of customizing it at any time to meet specific business requirements. A user-friendly dashboard offers engineers many functions for this purpose. The workflow and task management significantly support the robustness, traceability and efficiency of CAE workflows. In this presentation, GNS Systems will discuss the evaluation process as well as the introduction and go-live of an SPDM system. Based on the know-how from previous customer projects in the field of data management, GNS Systems explains how an evaluation process can be set up and implemented to ensure the success of the project.
An important aspect is the involvement of engineers throughout the configuration and process development phase. Close co-operation in the design and migration of CAE workflows into an SPDM promotes acceptance of the system and should therefore take place as early as possible. GNS Systems goes into detail about going live and explains which other processes are important for the development and introduction of an SPDM.
Presented By Navin Bagga (Rescale UK/Europe Office)
Authored By Navin Bagga (Rescale UK/Europe Office)
AbstractAs engineering and simulation workloads shift to cloud environments, securing sensitive intellectual property (IP) has become a critical priority for organisations. This trend introduces unique challenges, particularly when enabling multiple organisations and cross-continent collaborations. However, advancements in cloud computing and cybersecurity provide robust strategies to protect valuable data while maintaining operational efficiency. This presentation explores these strategies, focusing on real-world use case needs when accessing cloud-based HPC and best practices for safeguarding engineering IP in public cloud environments. A key focus will be on specifying and managing compute and storage preferences to achieve optimal performance, cost-efficiency, and security. The session will delve into methods for managing end-user permissions, ensuring compliance and minimising risks. Attendees will also gain insights into the benefits of running simulations and engineering workloads in their preferred cloud regions while maintaining stringent data protection protocols. The presentation will highlight how to ensure data security and compliance of HPC simulation data when running in the cloud, which is pivotal for organisations aiming to retain complete control over their data. Additionally, the discussion will cover strategies for managing data flows and transfers in distributed environments. Attendees will learn how leading enterprises are addressing challenges like data sovereignty and compliance through innovative security measures. The presentation will also explore how these practices enable enterprises to safeguard sensitive engineering workflows, facilitate secure global collaborations, and prepare data for artificial intelligence and machine learning applications. This session is designed for IT leaders, engineers, and cybersecurity professionals seeking actionable insights into cloud-based engineering solutions. By combining cutting-edge technologies and practical examples, this presentation will empower attendees to enhance their cybersecurity frameworks and optimise their cloud engineering operations while ensuring the confidentiality and integrity of their IP.
Authored & Presented By Stephanie Bailey-Wood (Dassault Systèmes)
DescriptionGet the latest information from Dassault Systèmes, e.g. product updates, support, tips & tricks, presentations. Details will be announced soon.
Description4a engineering: The accurate representation of material behavior in finite element simulations (e.g. crash simulations) is crucial for generating reasonable predictions during virtual product development and design optimization. However, in many cases engineers do not have access to high-fidelity material data or validated material cards which can be applied right away. 4a’s products and services are designed precisely to close this gap of knowledge and availability and help engineers to generate better predictions. During this talk, specialized testing systems which are precisely designed for the purpose of generating high-fidelity material data for simulations will be introduced, together with a unique holistic software solution and modelling approach which highly automates the process of generating even complex material cards based on test data. By using these methods, the time and effort to generate validated material cards can be significantly reduced. Finally, several examples of successful applications and use cases will be shown. Altair Engineering: AI in Design and Engineering – ready to deploy. Highly engineered products improve every aspect of life, from safety and sustainability to comfort. Whether it’s cars, phones, or everyday appliances, these innovations make life easier, safer and better. Though we rarely think about the design and manufacturing behind them, each product involves detailed designing, planning, developing, and making countless competing decisions. Hundreds, if not thousands, of engineers contribute to their development and some of them will focus on speeding up the design process through virtual product development. Engineers create large amounts of data, so it is only natural that AI is a good fit for virtual product design. There has been significant progress made in this area, but even the most renowned AI companies have focused on anything but virtual product development. As such, impressive AI advancements, such as the latest LLMs, do not have direct applicability to virtual product development, leaving engineers to discover AI themselves. This is why Altair has heavily invested in AI-powered engineering. We believe the future of product development lies in combining physics-based insights with data-driven learning. Using physical test and virtual design data to train AI models accelerates innovation while cutting infrastructure costs, shortening the distance from ideas to product. In this session, you will hear about our vision of how to implement AI in Design and Engineering, ready to deploy today.
DescriptionEsteco: Get the latest information from Esteco, e.g. product updates, support, tips & tricks, presentations. Details will be announced soon. Cadfem-Ansys: AI and simulation in digital engineering. In this 40-minute technical session at NAFEMS, Ansys and CADFEM will jointly explore the integration of Artificial Intelligence (AI) in digital engineering workflows. The presentation will delve into specific AI tools available, focusing on their capabilities to enhance engineering robustness, reduce computational costs, and streamline user accessibility. Attendees will gain insights into methodologies underpinning these AI tools, including machine learning models for predictive analytics and optimization techniques. The session will also cover customer examples demonstrating the practical application of AI in various engineering scenarios, highlighting improvements in accuracy and efficiency. By attending, users will understand how AI-driven digital engineering tools can democratize access to advanced technologies, enabling a wider range of engineers and designers to leverage these powerful resources in their projects.
Authored & Presented By S. Ravi Shankar (Siemens Digital Industries Software)
DescriptionProject presentation “Sound of Science”: a digital twin for concert halls, first developed for the Große Festspielhaus in Salzburg. With this application, event operators can virtually explore how the acoustics change with different hall configurations in order to select the optimal acoustic scenario. For example, orchestra arrangements on stage can be tested and arranged in advance – before a single note is played in the real world. The technologies used are part of the Siemens Simcenter portfolio of simulation solutions, which has been a core business of the company for over 15 years.
Panel discussion on the future application of simulation and artificial intelligence: Friend or Foe? Like hardly any technology before it, Artificial Intelligence is already influencing the way we do product development. Is AI an alternative to simulation or a multiplier? We would like to discuss the opportunities and limitations with experts from both fields.
Authored & Presented By Keb Nande (Hexagon)
DescriptionGet the latest information from Hexagon, e.g. product updates, support, tips & tricks, presentations. Details will be announced soon.
Authored & Presented By Karen Megarbane (Rescale)
DescriptionA novel methodology for simulation of PEM fuel cells and its application to optimisation of the flow field design. By: Novid Beheshti. At Intelligent Energy, we have developed new CAE multi-physics methodologies for PEM fuel cell development. Hydrogen PEM fuel cells are electro-chemical devices that operate via a catalytic reaction between hydrogen and the oxygen in air, which generates electricity (a flow of electrons). Transport of the electric charge and of the species, heating, and two-phase flow of water in air through the porous media of the gas diffusion layers (GDLs) and across the catalyst layers together create a complicated framework for numerical simulation. These phenomena are all simulated within a single framework developed by Convergent Science in collaboration with Intelligent Energy in the CFD code Converge. To make things more complex, the GDLs are compressed and deformed by the bipolar flow field plates, creating partially embedded and wavy shapes that can have a significant impact on the fluid flow within them as well as in the flow field tracks. The deformation of the GDLs is predicted using the Altair OptiStruct FEA (Finite Element Analysis) solver, and the results are exported as CAD geometry that is read into the Converge code for the aforementioned CFD simulations. In this presentation, the CAE methodology is explained and some of its results are shared, showcasing how Intelligent Energy’s fuel cells are developed.
DescriptionRecurDyn: Discover RecurDyn – Multiphysics Solutions for Assemblies in Motion. This session will introduce RecurDyn, a leading multiphysics CAE software specializing in Multi Flexible Body Dynamics (MFBD) and widely used to analyze assemblies in motion. RecurDyn excels in nonlinear FEA and contact analysis, enabling highly accurate simulations of complex systems. Its advanced multiphysics capabilities include coupled CFD–MBD (Multi-Body Dynamics) for simulating waterway driving, washing machine dynamics, oil churning, lubrication, and heat transfer between mechanical components and fluids. RecurDyn also computes heat transfer and thermal deformation caused by friction and contact. In addition, it can predict the vibration in the housing body caused by a motor and gear train. Real-world examples of MFBD-based multiphysics simulations will be presented, demonstrating how RecurDyn addresses challenges across diverse fields. The session will also highlight automation opportunities to simplify engineering workflows using Python scripting. Join us to discover how RecurDyn empowers engineers worldwide to innovate and optimize designs through cutting-edge simulation technology.
Viridian: Get the latest information from Viridian, e.g. product updates, support, tips & tricks, presentations. Details will be announced soon.
DescriptionGet the latest information from Quarnot and "tbc", e.g. product updates, support, tips & tricks, presentations. Details will be announced soon.
15:00
AbstractIn recent years, simulation has become increasingly important to industrial decision-making. As a result, expectations for simulation credibility have risen significantly. To achieve this goal, appropriate Verification, Validation & Uncertainty Quantification (VVUQ) processes are essential, with validation playing a central role. However, successful implementation of validation faces several challenges. One of the primary issues is the availability of dedicated high-quality experiments that serve as validation referents, in line with the recommendations of existing standards. Therefore, a broader range of validation referents needs to be considered in practice, leading to a wide spectrum of validation approaches with varying levels of rigour and credibility. In light of these challenges, the NAFEMS Simulation Governance & Management Working Group (SGMWG) has developed a book titled “Guidelines for Validation of Engineering Simulations”. The aim is to provide guidance for industry practitioners, helping them overcome these obstacles and enhance the credibility of their simulation results. It specifically focuses on the validation of physics-based simulation models. NAFEMS has adopted the definition of ISO 9000 and ISO 9001:2015, “Quality management systems – Requirements” [1], which is the basis of the NAFEMS “Engineering Simulation Quality Management Standard” (ESQMS) [2]. Other organizations, notably the American Society of Mechanical Engineers (ASME) and their ASME VVUQ standards [3, 4], require validation to be based on empirical evidence, i.e. physical experiments. The ISO/NAFEMS definition embeds the more stringent ASME definition as a subset, but allows for a wider range of validation referents, so that the same processes can be applied to applications with varying criticality and credibility requirements. The presentation tackles the above validation challenges and aims to explain the main features of the book. Firstly, it formally introduces the concept of a “spectrum of validation methods”. The methods span the range from the strict definition of validation used in the ASME VVUQ standards through to weaker validation approaches, including those supported by expert review. The introduction of the spectrum of validation methods is purposely high level and may need appropriate tailoring for specific industry applications. The second main contribution of the book lies in the formal definition of validation rigour attributes that significantly impact the credibility of simulations. It is recommended to incorporate these rigour attributes during the specification and planning of validation activities.
References
[1] ISO 9000:2015 & ISO 9001:2015, Quality Management Systems
[2] NAFEMS, ESQMS:01 Engineering Simulation Quality Management Standard, Jonathan M. Smith, NAFEMS, 2020
[3] ASME, Verification, Validation and Uncertainty Quantification Terminology in Computational Modeling and Simulation, ASME VVUQ 1-2022, New York, NY, 2022
[4] ASME, Standard for Verification and Validation in Computational Solid Mechanics, ASME V&V 10, American Society of Mechanical Engineers, New York, NY, 2019
Presented By Tim Newman (National Composites Centre)
Authored By Tim Newman (National Composites Centre), Jakub Kucera (National Composites Centre), Jon Taylor (National Composites Centre)
AbstractWhilst computers can store, access, render and manipulate Computer Aided Design (CAD) models, they have no understanding of what is being represented. Many processing steps in CAD are still completed manually, because a machine lacks the engineer’s tacit understanding of part features and what they are for. These manual tasks can account for a significant proportion of the cost of design iteration. Enabling machine feature identification can enable the automation of downstream processes such as design for manufacture. Current research focuses on lower-level features, specifically the process used to make the feature, e.g. punch or subtraction; it does not examine how those features combine into a useful unit in the design. The purpose of this research is to demonstrate identification of high-level features using graph neural networks (GNNs). CAD designs are generated automatically, forming a range of complex parts with a focus on the structures seen in aerospace composites. Holes, fillets, and radii are chosen as the selected features and investigated. These parts were labelled and converted into STEP format. After further processing, they were converted into GraphML format, which is compatible with PyTorch Geometric. The node features include face orientation, the count of each edge type (per the STEP definition), the basis of the faces, and the total degree; the edge features include the specific curve type of each edge. The graphs are tagged at node level with binary classes, i.e. hole or not hole. These classes are imbalanced, with a bias away from the class of interest; this is more prevalent for smaller features, as they are made up of fewer nodes. Models were trained to maximise the Area Under the Receiver Operating Characteristic Curve (AUC ROC) and down-selected to a graph attention network (GAT) architecture over plain GNNs and GraphSAGE. Initial results showed that the models were learning positions relative to the origin, so new samples were generated at random offsets from the point (0, 0, 0). The models are binary classifiers as part of a related project. The models demonstrate a high level of capability (AUC ROC ~99%) against the generated synthetic data. We also hand-labelled a set of real CAD models for the hole feature; there was significant degradation in performance against the real CAD models (AUC ~91%). Potential improvements focus on increasing the variability of the generator to produce more partially overlapping features. Work identifying additional features would also be beneficial overall. This paper demonstrates:
• A new mechanism for generating part models with composite features present in aerostructures
• GNN models showing a great deal of promise on synthetic data, with high accuracy and AUC ROC
• Demonstration of performance degradation against real data
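To make the node-classification setup concrete, the sketch below shows how a GAT-based "hole / not hole" classifier over a B-rep graph might look in PyTorch Geometric. It is a minimal illustration, not the authors' code: the layer sizes, the packing of node features into data.x, and the class-imbalance weighting are assumptions.

```python
# Minimal sketch (not the authors' code): a two-layer GAT node classifier for
# "hole / not hole" labels on a graph loaded from GraphML, assuming node features
# (face orientation, edge-type counts, face basis, degree) are packed into data.x
# and binary node labels into data.y.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class HoleClassifier(torch.nn.Module):
    def __init__(self, in_dim, hidden=64, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, 1, heads=1)  # one logit per node

    def forward(self, x, edge_index):
        x = F.elu(self.gat1(x, edge_index))
        return self.gat2(x, edge_index).squeeze(-1)

def train_step(model, data, optimizer, pos_weight):
    # pos_weight (e.g. torch.tensor([n_negative / n_positive])) counteracts the
    # class imbalance away from the "hole" class described in the abstract.
    model.train()
    optimizer.zero_grad()
    logits = model(data.x, data.edge_index)
    loss = F.binary_cross_entropy_with_logits(
        logits, data.y.float(), pos_weight=pos_weight)
    loss.backward()
    optimizer.step()
    return loss.item()
```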
Presented By Oliver Found (TWI North East)
Authored By Oliver Found (TWI North East), Madie Allen (TWI Limited), Oliver Found (TWI Limited)
AbstractSurrogate models may be employed where conventional finite element analysis (FEA) would consume excessive analyst and/or computing time and capacity. Data-driven surrogate models, applied with artificial intelligence (AI) and machine learning (ML), use approximate mathematical models based on data (rather than on physics) which, in theory, require greatly reduced computing and human effort. To employ a surrogate model in a finite element analysis context, the machine learning model must first be trained with results from existing finite element analyses; results are then predicted using the machine learning approach; and finally those results are mapped back to the model to validate and further train the tool. This approach has the potential to generate results much faster than traditional physics-based FEA models, but an understanding of how and when it is an appropriate and viable alternative to conventional finite element analysis needs to be developed; this is an aim of the reported work. To advance the state of the art of engineering surrogate modelling, the work reported here includes an assessment of both commercial and open-source software solutions, effectively as a benchmarking exercise, for addressing a structural integrity finite element analysis problem based on predicting the attributes of a cracked pipe. The outputs are compared against best conventional physics-based FEA modelling practice. The provision of sufficient training data is a concern in this type of work, and the approach taken to generate training and validation data, and its use, will be discussed. This work begins to answer fundamental questions about the future of finite element modelling, the role of finite element analysts in engineering problem solving, and the confidence which can be placed on such results, and allows comments to be made regarding the future application of this surrogate approach in an engineering structural integrity assessment context.
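The train–predict–validate loop described above can be sketched with a generic regression surrogate. The example below uses a Gaussian process regressor as a stand-in; the input features and the analytic placeholder for the FEA quantity of interest are assumptions, not the project's actual cracked-pipe model or tooling.

```python
# Minimal sketch of the surrogate workflow described above: train on prior FEA results,
# predict new cases, and hold back a validation set to check fitness for purpose.
# Feature choices and the analytic stand-in for the FEA output are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical FEA inputs: crack depth ratio a/t, crack aspect ratio a/c, pressure (MPa)
X = rng.uniform([0.1, 0.2, 5.0], [0.8, 1.0, 20.0], size=(200, 3))

def fea_stand_in(x):
    # Placeholder for the quantity of interest from prior FEA runs (e.g. a stress
    # intensity factor); in practice these values come from existing FE analyses.
    return 10.0 * x[:, 0] ** 1.5 * x[:, 2] / np.sqrt(x[:, 1])

y = fea_stand_in(X)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

surrogate = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=np.ones(X.shape[1])),
    normalize_y=True)
surrogate.fit(X_train, y_train)

# Predict with uncertainty; large standard deviations flag regions needing more FEA runs
y_pred, y_std = surrogate.predict(X_val, return_std=True)
rel_err = np.abs(y_pred - y_val) / np.abs(y_val)
print(f"max relative error on held-out cases: {rel_err.max():.2%}")
```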
BiographyGraduated with a PhD from the University of Sheffield in May 2024; the PhD topic was magnetic architectures in electric motors for use in the automotive industry. Joined TWI as a simulation and modelling engineer in October 2023, where my work ranges across structural, thermo-mechanical, electromagnetic, CFD and machine learning modelling.
Authored & Presented By Bernd Fachbach (Fachbach-Consulting e.U.)
AbstractVirtualization of engineering has the potential to address strategic goals such as reducing time-to-market, managing increasing product complexity and variety, or raising the flexibility to react to the market. Companies can realize it at various levels. The lowest level provides selective support for product development without substituting any physical test – hardware-based development is accompanied by a small number of simulation methods. This level primarily helps to raise the quality of a product but has minor influence on time to market. The highest level enables virtual release of a product, including the relevant certification, without any physical prototype. This is not only the level that enables a significant shortening of development time; it is also the best basis for any digital twin operation during the product lifecycle. Any virtualization level between these extremes must fit the individual company. But there are factors that influence the effectiveness regardless of the targeted level. Simulation methods enable the behavior or characteristics of products, materials, or processes to be identified virtually – in the best case without physical prototypes or physical testing. Mostly, companies start by investigating and implementing simulation methods – either bought or licensed from expert tool providers or developed by themselves. These methods are crucial requisites – but they are just the visible tip of the "virtualization iceberg". It is evident that even powerful high-quality tools need validated input data and skilled usage to deliver high prognosis quality. To set up a simulation model with realistic behavior, simulation engineers need sufficient system understanding. Finally, the provided simulation results must be verified and confirmed by adequate testing, analysis, and comparison. To guarantee continuous quality, the whole simulation procedure must be confirmed and documented, including the relevant usage limitations. The required level of detail depends on the level of experience of the users who will apply the validated procedure within product development. But there are several further factors that unleash the potential of virtualization and lead to efficient virtualized product development: process design, dedicated organization, efficient data handling, numerical optimization and robustness analysis, procedure automation, tangibility and trust, as well as data-driven design methods. There are still many companies that cannot imagine, or do not dare, to follow the path towards virtualization consistently. Once the virtualization level rises and numerical support becomes significant, it is no longer sufficient to stick to the former hardware-oriented development process. Time periods, the number of gateways and milestones, as well as synchronization points for functions and data must be defined accordingly. The digital prototype must become the leading element for development – together with the implementation of the relevant responsibilities in the organization.
Synchronization and integration of the high number of numerical disciplines and methods – e.g., by data provision or by system simulation – is becoming crucial for cross-disciplinary optimization. The real benefit of virtualization appears when parameter variation helps to explore the solution space or the robustness of a solution in an early design phase, when MBSE models enable highly automated validation of new functions or product variants, when tangible results allow decision makers to explore data directly, or when AI-based algorithms assist in product design or smart product functions. The presentation will give a comprehensive, experience-based picture of potentials and dependencies as well as of requisites and recommendations for a successful way towards engineering virtualization.
Presented By Fabiola Cavaliere (SEAT)
Authored By Fabiola Cavaliere (SEAT), Gabriel Curtosi (SEAT S.A.)
AbstractThe automotive design process is inherently complex and time-consuming, involving numerous specialists who must collaborate closely to predict and optimize every aspect of a vehicle’s performance. Engineers face the challenge of meeting strict targets and regulations while adapting to rapidly changing market demands, all without sacrificing quality or increasing development costs. Traditional approaches often rely on repeated trial-and-error, focusing on only a small subset of all possible design configurations. This not only slows down the process but also risks overlooking potentially better solutions that could enhance product performance or reduce overall costs.To address these challenges, integrating Machine Learning (ML) into Computer-Aided Engineering (CAE) has emerged as a promising strategy. By leveraging ML-based methods, engineers can quickly evaluate a wide range of design variables and performance criteria without running excessively detailed and time-consuming simulations. Instead of limiting exploration to a handful of configurations, they can now consider a much broader portion of the design space, identifying optimal choices more efficiently. This early insight reduces the likelihood of late-stage design changes, cutting down on both expenses and delays.SEAT S.A. has developed ARCO (Advanced Real-time Car-body Optimization) as a physics-informed ML technique specifically aimed at improving the noise, vibration and harshness (NVH) performance of car-body structures. NVH behavior is highly sensitive to changes in global static and dynamic stiffness, making it a critical aspect of vehicle design. By applying the Proper Generalized Decomposition (PGD) method, ARCO can efficiently handle parametric analyses of static and dynamic behaviors. This approach enables a single offline computation to capture a wide range of design possibilities, effectively reducing model complexity and highlighting key structural patterns.Previous studies have already demonstrated the potential of ARCO in addressing NVH challenges. The next step involves adapting ARCO to handle full-scale industrial models and ensuring seamless integration with commercial engineering tools. By streamlining the design process and making advanced ML approaches more accessible, ARCO represents a significant step forward in achieving improved, efficient, and reliable automotive development.
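The "single offline computation" enabled by PGD relies on a separated representation of the parametric solution. A generic form is shown below as an illustration; it is the standard PGD ansatz, not necessarily ARCO's exact formulation, and the parameter symbols are placeholders.

\[
\mathbf{u}(\mathbf{x},\mu_1,\dots,\mu_p)\;\approx\;\sum_{i=1}^{N}\mathbf{F}_i(\mathbf{x})\prod_{j=1}^{p}G_i^{\,j}(\mu_j)
\]

Here \(\mathbf{x}\) denotes the spatial coordinates, \(\mu_j\) the design parameters (e.g. panel thicknesses or stiffener positions), \(\mathbf{F}_i\) spatial modes, and \(G_i^{\,j}\) one-dimensional parametric functions. Once the \(N\) terms are computed offline, evaluating a new design reduces to multiplying precomputed functions, which is what makes real-time exploration of the design space possible.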
Presented By Moncef Salmi (Hexagon)
Authored By Moncef Salmi (Hexagon), Vinay Madhavan (Hexagon MI)
AbstractRecycled short fiber reinforced polymers (SFRPs) offer promising solutions for sustainable design in industries like automotive, where environmental policies are increasingly enforcing recycled material quotas. However, the inherent variability in material and process performance for recycled SFRPs presents significant design challenges. This uncertainty can lead to either overdesign—using excessive material to offset unknowns—or underdesign, which compromises the product’s reliability. Both approaches come with trade-offs: overdesign increases costs and environmental impact, while underdesign risks the integrity of the final product.To address these issues, a machine learning-based robust design approach for recycled SFRPs has been developed. This approach leverages machine learning to replace computationally costly structural finite element models with reduced-order models. These efficient models enable thousands of structural analyses to be performed across different uncertainty scenarios at a fraction of the cost of high-fidelity simulations. Using a Monte Carlo approach, this method incorporates variability from factors like local fiber orientation (due to process-related uncertainties), material performance fluctuations, and conditional factors. This machine learning integration ensures a comprehensive reliability assessment while minimizing computational costs, making robust reliability evaluation both feasible and practical.The machine learning-based robust design approach enables engineers to quantify uncertainties and accurately assess structural reliability, allowing for optimized material selection without the risk of overdesign or underdesign. Through this methodology, engineers can tailor the amount and type of recycled SFRPs used in specific applications, achieving both sustainability and efficiency.The paper will showcase the application of this method to a structural part, quantifying its reliability by accounting for different sources of uncertainty in both material and process parameters. This case study demonstrates how the machine learning-based robust design approach provides a clear understanding of how these uncertainties affect component reliability, offering insights into achieving optimal material and design choices without overdesign or underdesign.This advanced methodology enables engineers to balance sustainability and reliability for recycled SFRP applications, setting a new standard for robust and efficient design in industries where environmental goals are increasingly critical.
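The Monte Carlo reliability loop described above becomes affordable once a trained reduced-order model stands in for the finite element solve. The sketch below illustrates the pattern only: the input distributions, the placeholder ROM function, and the allowable stress are invented for illustration, not the paper's calibrated models or data.

```python
# Minimal sketch (assumed distributions and limit value are placeholders) of the Monte
# Carlo reliability assessment described above: a trained reduced-order model replaces
# the finite element solve, so thousands of uncertainty scenarios become affordable.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 50_000

def sample_inputs(n):
    # Uncertain inputs: local fiber orientation angle (deg), matrix stiffness scatter (MPa),
    # and a conditioning factor -- all assumed distributions for illustration.
    theta = rng.normal(loc=0.0, scale=8.0, size=n)
    e_matrix = rng.lognormal(mean=np.log(3.2e3), sigma=0.08, size=n)
    conditioning = rng.uniform(0.85, 1.0, size=n)
    return np.column_stack([theta, e_matrix, conditioning])

def rom_max_stress(x):
    # Stand-in for the trained reduced-order model (e.g. a regression surrogate of the
    # structural FE model); here just an arbitrary smooth function for illustration.
    theta, e_m, cond = x[:, 0], x[:, 1], x[:, 2]
    return 80.0 + 0.4 * np.abs(theta) + 0.002 * e_m / cond

allowable_stress = 120.0  # MPa, hypothetical design limit
stress = rom_max_stress(sample_inputs(n_samples))
prob_failure = np.mean(stress > allowable_stress)
print(f"estimated probability of failure: {prob_failure:.3e}")
```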
Presented By Dr. Hendrik Schafstall (Detroit Engineered Products)
Authored By Dr. Hendrik Schafstall (Detroit Engineered Products), Kavin Paul (Detroit Engineered Products)
AbstractThe cooling system of an automobile requires careful consideration in the design phase, particularly for electric vehicles, which have more heat sources than a conventional internal combustion engine. It is in the designer’s best interest to create a cooling system that maintains operating temperatures within optimal limits while consuming less power for better mileage. Furthermore, there are hard limits on EV battery pack temperature that must be maintained to avoid thermal runaway. With the advent of commercial thermal 1D-CFD simulation software, it has become possible to simulate these cooling systems with a high degree of accuracy, allowing several design iterations to be tested and analysed without requiring costly and time-consuming physical testing. These software packages, however, still fall short of creating optimal design iterations of the cooling circuits, mainly because of the limited optimization algorithms available in the software. Rerunning simulations for every possible design is also computationally expensive and time-consuming. Hence, it is preferable to have an optimization process that can provide a number of potential designs that meet the required targets in a short period of time. In this work, the optimization of an EV cooling system to reduce the maximum e-motor temperature by 5°C is presented. A 1-D model of the existing cooling system is developed in GT-Suite to generate the Design of Experiments (DOE) matrix for the selected optimization parameters, such as the coolant flow rate, heat exchanger dimensions and so on. Subsequently, an ML-based approach for design optimization is applied using SIMULIA Isight. Using the DOE matrix with parameters and outputs, a Radial Basis Function (RBF) neural network approximation of the cooling circuit is created. RBF approximations are characterized by reasonably fast training and compact networks, and they are useful in approximating a wide range of nonlinear spaces. The tool also has features to analyse the sensitivity of parameters, allowing the selection of only the most significant parameters for design optimization. The non-dominated sorting genetic algorithm (NSGA-II) is used to identify optimal solutions. This multi-objective algorithm can identify multiple local minima for the given objectives and constraints, which results in three optimal design iterations capable of achieving the 5°C temperature reduction with different parameter values, for example the lowest coolant flow rate or the smallest heat exchanger size. The overall ML-based optimization process is about 30 times faster than the conventional optimization process and can significantly speed up the design phase. The created designs are retested in the original 1D-CFD software, showing high accuracy with an error of only 1% in the motor temperature, thus validating the optimization results.
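The pattern of fitting a surrogate to a DOE matrix and then searching it with a multi-objective genetic algorithm can be sketched as below. SciPy's RBF interpolator and pymoo's NSGA-II are used here as open-source stand-ins for the Isight tooling named in the abstract; the DOE columns and the analytic responses replacing the GT-Suite outputs are illustrative assumptions.

```python
# Minimal sketch of surrogate-based multi-objective optimization: an RBF approximation
# of the DOE responses is searched with NSGA-II. The design variables (coolant flow rate
# in L/min, heat-exchanger frontal area in m^2) and the synthetic responses are placeholders.
import numpy as np
from scipy.interpolate import RBFInterpolator
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import Problem
from pymoo.optimize import minimize

rng = np.random.default_rng(1)
doe_x = rng.uniform([5.0, 0.05], [25.0, 0.30], size=(60, 2))   # DOE inputs
doe_temp = 95.0 - 0.8 * doe_x[:, 0] - 60.0 * doe_x[:, 1]       # max motor temperature, degC
doe_power = 2.0 * doe_x[:, 0] ** 1.5 + 40.0 * doe_x[:, 1]      # pump/fan power, W

temp_rbf = RBFInterpolator(doe_x, doe_temp)    # surrogates of the 1D-CFD responses
power_rbf = RBFInterpolator(doe_x, doe_power)

class CoolingProblem(Problem):
    def __init__(self):
        super().__init__(n_var=2, n_obj=2,
                         xl=doe_x.min(axis=0), xu=doe_x.max(axis=0))

    def _evaluate(self, x, out, *args, **kwargs):
        # Trade off motor temperature against power consumption on the surrogate
        out["F"] = np.column_stack([temp_rbf(x), power_rbf(x)])

res = minimize(CoolingProblem(), NSGA2(pop_size=60), ("n_gen", 80), seed=1, verbose=False)
# res.X holds the Pareto-optimal candidates; the best few would be re-run in the
# original 1D-CFD model to confirm the predicted temperature reduction.
```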
Presented By Matthias Grundner (Denso Automotive Deutschland)
Authored By Matthias Grundner (Denso Automotive Deutschland), Christoph Starke (Siemens Digital Industries Software)
AbstractDENSO is a leading global automotive supplier with a thermal development center near Munich for the North European market. The development process is heavily based on simulation and therefore, in 2017, the HVAC simulation team started to fully automate its CFD workflow. The process automation was developed in-house. The input is tessellated sub-parts of the HVAC components. Pre-processing, solving and post-processing are controlled by a wizard-type GUI and performed using STAR-CCM+. This approach increased the quality and efficiency of the CFD workflow significantly. The digital simulation methods became the leading tools in product development, and therefore the total number of simulation activities has increased significantly over the years. Faced with this high volume of simulation data, the traditional handling no longer satisfied internal and external requirements. Data transfer from design to simulation was not fully traceable and required many manual working steps which were prone to error. Therefore, linking the CAD design and CFD simulation data in the existing Siemens Teamcenter PLM system was the consistent next step for the development process. Teamcenter Simulation provides an open interface to connect any kind of simulation tool, including third-party products and in-house developments. This is important, as the existing automation should not be replaced. The existing CFD workflow is Linux-based and makes use of extensive Java script automation. The tool had to be connected nearly untouched during the first Teamcenter Simulation introduction phase. An important difference in the new process is that the simplified simulation geometries for the airflow volume are now created as associated CAD parts instead of isolated geometries as before. The data transfer to all CAE tools is now managed by a Teamcenter workflow which also includes a feedback loop for corrections and final freeze status release. CAE BOMs for different simulation setups can be created by structure mapping. As before, the tool ANSA is used to create the surface meshes from the simulation geometries, followed by the automated CFD simulation setup and execution in STAR-CCM+. But now, those steps are fully integrated into the Teamcenter workflow to establish the Digital Thread from end to end. This traceability allows the rising number of simulations to be performed efficiently while avoiding expensive errors due to data inconsistencies. The base implementation has been successfully finished and future transformation steps will be defined within the first productive phase of Teamcenter Simulation. The vision of DENSO is to integrate the entire CFD simulation workflow as well as other CAE workflows into Teamcenter. From today’s point of view it is unclear whether the Java integration will remain or be replaced by a newer tool integration within the next years.
Presented By Adi Adumitroaie (Porsche eBike Performance)
Authored By Adi Adumitroaie (Porsche eBike Performance), John Sandro Rivas (Porsche eBike Performance GmbH)
AbstractThe Finite Element Method/Analysis (FEM/FEA) is a cornerstone of engineering simulation, enabling precise modeling of complex physical systems and accurate results for complex engineering tasks. Recent developments in FEM software and the supporting hardware technologies have set industrial product development on the path of a new paradigm shift, from simulation-supported design to simulation-driven design, where FEA is involved from the early stages and supports all stages of product development: from component to system level, from single to multiple physics and scales, and from pre-design to testing and certification (virtual testing). This allows for a reduced number of design iterations and less physical testing, which translates into reduced development costs and time to market. However, as the size and complexity of simulations grow, so does the computational demand, which is why all commercial FEM software vendors have included robust multi-processor parallel computing capabilities in their portfolios, allowing the user to significantly accelerate analyses and tackle larger and more complex problems, closer to real industrial designs, manufacturing conditions and operating environments. This article presents a benchmark study on maximizing computation speed and scalability using parallel processing in one of the best-known commercial FEM software packages, namely the Abaqus 2024 edition. The selected problem is representative of current simulation challenges in industry: a multi-body and multi-material assembly, multiple contact interfaces, large deformations and nonlinear material behavior. The assessment is run on a hardware configuration commonly used by industry, namely a multi-CPU individual workstation equipped with a GPGPU. Quantitative metrics in terms of both hardware (number of CPUs, GPUs) and licensing (token) costs are presented, as applicable to both the implicit and explicit solvers in Abaqus. Similar problem setups of different sizes (numbers of degrees of freedom) are evaluated in order to assess the scalability of the multi-processor parallel computing capabilities. The study offers guidelines for hardware selection, configurations, and best practices for multi-processor parallel computing in industrial environments.
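A scalability benchmark of this kind typically boils down to launching the same job with different CPU counts and recording wall-clock time and licensing cost. The sketch below is a simple driver for that sweep, not the authors' benchmark harness: the job name and CPU counts are placeholders, and the token estimate shown is only the commonly quoted approximation, to be checked against the actual license terms.

```python
# Minimal benchmark-driver sketch (job name, CPU counts, and the token estimate are
# illustrative assumptions; verify licensing against the actual agreement). It sweeps
# the number of CPUs, runs the same Abaqus job, and records wall-clock time.
import csv
import subprocess
import time

JOB = "assembly_benchmark"      # hypothetical input deck assembly_benchmark.inp
CPU_COUNTS = [1, 2, 4, 8, 16]

def estimate_tokens(cores: int) -> int:
    # Commonly quoted Abaqus analysis-token estimate; treat as an assumption.
    return int(5 * cores ** 0.422)

with open("scaling_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["cpus", "tokens_est", "wallclock_s"])
    for n in CPU_COUNTS:
        cmd = ["abaqus", f"job={JOB}_{n}cpu", f"input={JOB}.inp",
               f"cpus={n}", "gpus=1", "interactive"]
        t0 = time.time()
        subprocess.run(cmd, check=True)   # "interactive" blocks until the solve finishes
        writer.writerow([n, estimate_tokens(n), round(time.time() - t0, 1)])
```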
15:20
Presented By Gregory Westwater (Fisher Controls International LLC)
Authored By Gregory Westwater (Fisher Controls International LLC), Steve Howell (Abercus)
AbstractVVUQ is the collective acronym for verification, validation and uncertainty quantification. In order to gain confidence in engineering simulation, it is necessary to undertake VVUQ activities, but it is also important that these activities are appropriate for the level of risk associated with the engineering application and the maturity of the simulation process. When the consequences of failure are high, such as in the aerospace or medical industries, or if simulation work is undertaken on a new problem or application that is not yet well understood or for which prior simulation work is not available, a significant VVUQ effort may be necessary, and the time and financial investment for this effort should not be underestimated. Mere cost or inconvenience does not excuse simulation users from the responsibility of providing credible and reliable simulations. For lower-risk applications, however, or where the problem is already well understood through previous simulation, a reduced VVUQ effort may be justified. The level of VVUQ effort that may be appropriate is considered in this paper. In particular, three aspects are considered:
1. The importance of the conceptual model, since this forms the basis of both the simulation model and the referent used for the validation of the simulation model.
2. Simulation verification and uncertainty quantification, and specifically the maturity of the simulation process.
3. The level of validation rigour in terms of the recently published NAFEMS Guidelines for the Validation of Engineering Simulation (GVES).
The importance of properly documenting each of these three aspects is discussed.
Presented By Daniela Steffes-lai (Fraunhofer SCAI)
Authored By Daniela Steffes-lai (Fraunhofer SCAI), Jochen Garcke (Fraunhofer SCAI), Rodrigo Iza-Teran (Fraunhofer SCAI), Tom Niklas Klein (Fraunhofer SCAI), Mandar Pathare (Fraunhofer SCAI), G. N. Devdikar (Stellantis-Opel)
AbstractThe CAE field is facing enormous innovation opportunities offered by the rapid development of artificial intelligence (AI), particularly in text and image processing, as long as the unique challenges of complex CAE data are addressed. CAE engineers foresee AI as a potential tool not only for data management but also for enhancing complex engineering design processes across development stages. This presentation examines how AI could transform CAE with novel data representation and exploration techniques. We focus on AI-driven methods for visualizing and exploring model evolution and linking it to resulting effects, enabling inference and intelligent data retrieval. Implementing AI effectively in CAE requires transparent learning algorithms capable of handling limited but complex data, such as mesh functions and geometries. Off-the-shelf AI models, such as large language models (LLMs), are not directly applicable. Therefore, we discuss emerging methods tailored to CAE challenges, highlighting AI's potential in engineering product development. First, we present the latest extensions of our Laplace-Beltrami shape feature approach (LBSFA), implemented in our software tool SimExplore, to investigate similarities or exceptions in deformations and mesh functions, called events, across many simulations. The combined representation of model changes (in geometry and input parameters) and results allows the model changes to be set in context with the insights from simulation result analysis. This provides a fundamental basis for further post-processing of results, such as sensitivity and correlation analysis, up to the envisioned learning of relationships. However, the engineer still needs to select mesh functions and components of interest from a ranked list of components that are influenced by the model changes. To address this, we have investigated more sophisticated filter and grouping methods to ease the decision on which component to look at first. On the one hand, a method that filters out all parts without deformations that are influenced only by rigid-body motion has proven to be very effective. On the other hand, concentrating on jumps in a development tree identifies the measures with the highest impact; in this sense, a jump means that the corresponding result jumps from one behavioral mode to another. Second, we explore how LLMs can be leveraged for CAE data. We present a proof of concept using a Retrieval-Augmented Generation (RAG) method together with our structured data representation to enable easy exploration of relationships within the data of a development project. The RAG approach uses a knowledge base outside the training database and thus offers an extension or specialization in domain-specific knowledge. With this approach we could achieve an automatic, knowledge-driven preselection of important components. Furthermore, fast metamodels can be generated based on this selection, which are an efficient alternative to data-intensive convolutional neural networks (CNNs). Finally, we outline the different parts needed for an AI support system which learns measure-effect relationships. This contribution illustrates how far our AI-driven solutions specialized for CAE applications can already go in supporting complex simulation data analysis tasks, giving engineers more resources for interpretation of results and decision making.
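To make the RAG idea over structured CAE records concrete, the sketch below shows only the retrieval step: project records (model changes and observed events) are embedded, the most similar records to a query are returned, and those would be passed to an LLM as grounding context. The embed() placeholder and the record fields are invented for illustration; this is not the SimExplore or LBSFA implementation.

```python
# Stripped-down sketch of the retrieval step of a RAG workflow over structured CAE
# records (model changes and observed events). embed() and the record fields are
# placeholders, not the SimExplore / LBSFA implementation.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a text-embedding model (e.g. a sentence-transformer)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

knowledge_base = [
    {"id": "run_0042", "text": "B-pillar thickness +0.5 mm; reduced intrusion at 40 ms"},
    {"id": "run_0057", "text": "Rocker material change; new buckling mode in sill"},
    {"id": "run_0061", "text": "Spot-weld pitch reduced on roof rail; no effect on pulse"},
]
kb_vecs = np.stack([embed(r["text"]) for r in knowledge_base])

def retrieve(query: str, k: int = 2):
    """Return the k records most similar to the query; these would be handed to the
    LLM as context so its answer is grounded in project data, not training data."""
    q = embed(query)
    scores = kb_vecs @ q                     # cosine similarity (vectors are unit norm)
    top = np.argsort(scores)[::-1][:k]
    return [knowledge_base[i] for i in top]

for hit in retrieve("which change influenced the sill deformation?"):
    print(hit["id"], "->", hit["text"])
```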
BiographyDr. Daniela Steffes-lai is a senior researcher at Fraunhofer SCAI, department of numerical data-driven prediction, Germany. She graduated from University of Cologne, Germany with a PhD in Mathematics. Her current research is focused on the development of data analysis methods for CAE data. She has many years of experience in leading projects in the field of data analysis for engineering applications. Furthermore, she is the main developer of the SCAI Software product SimCompare which provides a comparison of two FE simulation results.
Presented By Ceyhun Sahin (Noesis Solutions)
Authored By Ceyhun Sahin (Noesis Solutions), Dr. Riccardo Lombardi (Noesis Solutions)
AbstractTracing particles and analyzing their settling and final sedimentation has a variety of applications, from geology to chemistry. Numerical analysis of particle tracing involves complex calculations of heterogeneous flow dynamics. Although the literature is rich in empirical and numerical solutions, the nature of the problem dictates customized models. In this study, a Reduced Order Model (ROM) is used to simulate the global behavior of the particles, as a function of time, during the settling and sedimentation phases, in response to different geometries and properties of the particles. A further aim is to unify several models in a given application into a more general model. By focusing on average metrics, such as the surface defined by the accumulating particles, it is possible to reduce the impact of the intrinsically non-linear behavior of the particles. This allows a compact model which can reproduce the critical aspects of the simulation without the need for particle-to-particle tracking. As a proof of concept, we focus on a collection of spherical particles relocating from a higher container to a lower one through an opening whose location and width are parametrized. A ROM is trained to mimic the simulations for accumulated particles free-falling into a lower container. The requirements of such a ROM are to predict:
i. the sedimentation of the particles still in the upper container,
ii. the particle settling in the lower container,
iii. the number of particles still in flow,
iv. the altitude of the particles still in flow, and
v. the velocity of the flowing particles.
To handle the problem independently of individual particle trajectories, the data is pre-processed to extract the initial and instantaneous distributions of the particles. Such pre-processing allows simulations with varying numbers of particles, particle-shape impurities, and materials to be combined and homogenized in one ROM. In this stage of the study, the initial sedimentation distribution of the particles in the upper container, due to different shapes of the container and of the opening location and size, forms the independent variables. The particles are processed into three distinct groups: still in the upper container, flowing, and settled in the lower container. For those in the containers, the boundaries and geometry of the volume of space they occupy are identified, parametrized, and modelled. The resolution of the pre-processing of the particle distributions can be fine-tuned according to the accuracy requirements of the application. The model has been tested using designs not included in the training dataset. The predictions showed errors of less than 10% for the distribution of particles still in the upper container. The reliability of the particle settling predictions in the lower container is initially affected by the quasi-random distribution of the first particles and improves as the particles’ accumulation becomes the dominant factor in the sedimentation distribution; accordingly, in transient cases the prediction quality increases over time. The overall numbers of particles still flowing at various stages are predicted successfully. Flowing particles can also be divided height-wise into subgroups. The approach has been proven to satisfy the initial requirements by predicting the instantaneous sedimentation in seconds.
Future work includes training the model with simulations involving particles of different shapes, weights, and sizes, to assess both the model's scalability to a higher number of design parameters and its invariance to the number of particles. Such an extended model will open the door to unifying the custom numerical solutions in a given application dealing with particle flows.
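The pre-processing step described above, which classifies particles into groups and reduces each container's contents to averaged quantities, can be sketched as follows. The thresholds, bin counts, and the "max height per x-bin" surface definition are assumptions for illustration, not the authors' exact parametrization.

```python
# Minimal sketch (thresholds and bin counts are assumptions) of the pre-processing step:
# classify particles into upper-container / flowing / lower-container groups and reduce
# each container's content to a fill-height profile over x -- the kind of averaged
# quantity the ROM is trained to predict instead of individual trajectories.
import numpy as np

def summarize(positions: np.ndarray, opening_z: float, floor_z: float, n_bins: int = 32):
    """positions: (n_particles, 3) array of x, y, z at one output time step."""
    z = positions[:, 2]
    upper = positions[z > opening_z]
    lower = positions[z < floor_z]
    flowing = positions[(z >= floor_z) & (z <= opening_z)]

    def height_profile(pts):
        # Surface defined by the accumulated particles: max height per x-bin
        profile = np.full(n_bins, np.nan)
        if len(pts) == 0:
            return profile
        bins = np.linspace(pts[:, 0].min(), pts[:, 0].max(), n_bins + 1)
        idx = np.clip(np.digitize(pts[:, 0], bins) - 1, 0, n_bins - 1)
        for b in range(n_bins):
            sel = pts[idx == b]
            if len(sel):
                profile[b] = sel[:, 2].max()
        return profile

    return {
        "n_flowing": len(flowing),
        "mean_flow_altitude": flowing[:, 2].mean() if len(flowing) else np.nan,
        "upper_surface": height_profile(upper),
        "lower_surface": height_profile(lower),
    }
```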
Presented By Joao Gregorio (National Physical Laboratory)
Authored By Joao Gregorio (National Physical Laboratory), Moulham Alsuleman (National Physical Laboratory), Hannah Strassburg (National Physical Laboratory)
AbstractRemanufacturing worn-out industrial parts, prolonging their lifecycle, can decrease greenhouse gas emissions from manufacturing processes by up to 45% and also has the benefit of reducing production costs. Stricter environmental policies and regulations, focusing on a net-zero economy, increase the attractiveness of these processes to manufacturers. However, the validation of remanufactured parts relies mostly on physical testing, which counteracts some of the benefits of prolonging their lifecycle. This makes the shift from physical experiments to digital models and simulations, including systems such as digital twins, an area of focus. The ReMake testbed, developed by the National Manufacturing Institute Scotland (NMIS), aims to replace physical testing for validating reconditioned industrial parts. The main blocker to the adoption of Digital Verification and Validation (DVV) frameworks remains the credibility of models and simulations. Low credibility in model and simulation outputs can hinder decision-making due to the lack of stakeholder confidence in these outputs. This is particularly prominent in high-risk applications, which require the credibility of results to be more robust to support a decision. Under the auspices of the ReMake testbed, the National Physical Laboratory (NPL) Data Science group has developed a data model, based on a NASA Standard for Models and Simulations, capable of enhancing the credibility of the data used by the ReMake testbed. NPL’s Data Science group has also carried out work to link risk events related to the use of remanufactured parts—such as the risk level and impact associated with critical failure—to the trustworthiness that models and simulations need to achieve to aid decision-making. The baseline credibility that a model needs for use in DVV is set by a risk assessment, which is often a time-consuming and human-centric activity. Risk factors were mapped into a risk data model to streamline the risk assessment generation process. This reduces human bias and creates a traceable pathway to the risk data for validation and audit purposes. The ReMake model focuses on data traceability and provenance, and the risk model maps risk factors to credibility requirements. In particular, the risk model establishes a solid foundation for objectively capturing risk aspects by leveraging FAIR—Findable, Accessible, Interoperable, Reusable—data, including historic data from previous DVV activities. We propose a cohesive DVV framework that combines risk assessment and data traceability to enhance model credibility and automate decision-making for all engineering disciplines. This framework would have the ability to enhance simulation credibility while also providing the infrastructure to automate decision-making processes. This integration optimises good simulation data management practices for DVV by ensuring that data credibility remains the key factor for building trust in the process. This increase in demonstrably trustworthy model and simulation results yields a higher number of decisions made without recourse to physical testing, bringing benefits to stakeholders, such as manufacturers or policymakers, in terms of efficiency and sustainability.
In the context of the ReMake testbed, this would equate to more remanufactured parts being safely deployed based on the DVV framework, aligning with the larger goals of supply chain efficiency, and promoting net zero.This framework could act as a baseline for industries seeking to enhance their sustainability efforts through digital transformation. By implementing DVV-focused best practices, with a focus on the role of risk for DVV, organisations can set minimum credibility standards for their models and simulations. This approach not only enhances decision-making but also promotes greater trust in digital methodologies.
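To illustrate the spirit of mapping risk factors to credibility requirements, the toy sketch below encodes a few risk dimensions and a simple scoring rule. The field names, scales, and thresholds are invented for illustration only; they are not NPL's risk data model or the NASA standard's categories.

```python
# Toy sketch (field names, scales, and thresholds are invented for illustration, not
# NPL's schema) of mapping risk factors for a remanufactured part to a required
# model-credibility level, in the spirit of linking risk assessment to DVV requirements.
from dataclasses import dataclass

@dataclass
class RiskFactors:
    failure_consequence: int   # 1 (negligible) .. 5 (catastrophic)
    decision_influence: int    # 1 (advisory) .. 5 (sole basis for release)
    novelty: int               # 1 (well-characterized part) .. 5 (first-of-a-kind)

@dataclass
class CredibilityRequirement:
    level: str                 # e.g. "low", "medium", "high"
    needs_physical_referent: bool

def required_credibility(risk: RiskFactors) -> CredibilityRequirement:
    # Simple illustrative scoring rule; a real framework would be calibrated and audited.
    score = risk.failure_consequence * risk.decision_influence + risk.novelty
    if score >= 18:
        return CredibilityRequirement("high", needs_physical_referent=True)
    if score >= 9:
        return CredibilityRequirement("medium", needs_physical_referent=True)
    return CredibilityRequirement("low", needs_physical_referent=False)

print(required_credibility(RiskFactors(failure_consequence=4, decision_influence=4, novelty=2)))
```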
BiographyDr. João Gregório received his PhD in Chemical and Process Engineering from the University of Strathclyde in 2023 by investigating polymerisation network structures near solid surfaces using molecular dynamics. Prior to that, he worked as Data Scientist at SpaceLayer Technologies (2017–2018) developing machine learning models for forecasting air pollution. In 2021, he moved to the Data Science Department of the National Physical Laboratory as a Higher Scientist. His current research topics mainly include data quality, digital manufacturing, and digitalisation.
Presented By Mathieu Sarrazin (Siemens Industry Software NV)
Authored By Mathieu Sarrazin (Siemens Industry Software NV), Claudio Colangeli (Siemens Digital Industries Software), Jan Debille (Siemens Digital Industries Software), Elena Daniele (Siemens Digital Industries Software)
AbstractThis paper explores the mechanical and NVH (noise, vibration, and harshness) challenges in developing high-performance electric drive systems, with a focus on the design, simulation, and physical testing of transmission housing. As a key component in modern electric vehicles (EVs), the transmission housing is essential for meeting stringent NVH standards while ensuring structural integrity under demanding dynamic conditions.A key challenge in transmission housing design is ensuring it can withstand high forces, particularly during sport or deceleration modes when the e-motor generates significant torque. Simultaneously, minimizing NVH emissions is crucial to meet the performance expectations of electric vehicles. A specific NVH issue addressed in this study is gear whining, a high-pitched noise generated by the meshing of gears under certain driving conditions. This noise, often perceived as intrusive, is particularly noticeable during specific load conditions and needs to be mitigated to improve the overall driving experience.The engineering process begins by translating vehicle-level NVH targets into specific component-level requirements, enabling a focused approach to solving individual challenges. High-fidelity digital twins model the mechanical and NVH characteristics of the transmission housing, allowing for early-stage testing and optimization. This reduces reliance on costly prototypes and accelerates development. However, physical testing remains essential to validate digital models and ensure their accuracy. Comprehensive NVH testing campaigns at both the component and system levels help calibrate the models and confirm the design’s effectiveness in mitigating noise, including gear whining.A case study on the e-drive gearbox housing demonstrates the effectiveness of this integrated approach. Experimental data from pre-test and validation phases refine the digital twins, ensuring accurate predictions of vibro-acoustic and structural behavior. The iterative process ensures the transmission housing meets NVH, durability, and mechanical performance targets while adhering to budget and timeline constraints.By combining advanced simulation techniques, digital twin technology, and rigorous physical testing, this approach streamlines the development process, minimizes risks, and reduces costs. The results show significant reductions in noise, including the reduction of gear whining, along with improved structural performance.In conclusion, this paper highlights the importance of integrating testing and simulation in the development of electric drive systems. The combined use of virtual and physical testing helps meet the demanding NVH standards of the electrified mobility market, ensuring optimal performance.
BiographyAs a Senior Technology Go-to-Market Specialist at Siemens, Mathieu is passionate about driving go-to-market strategies for cutting-edge technologies. His focus is on translating complex technical solutions into customer-centric messaging and positioning, ensuring Siemens delivers significant value through impactful product strategies and clear communication. With over 10 years of experience in Research and Development (R&D), Mathieu has previously served as a Research Engineering, Technology, and Innovation Manager. In this role, he led teams and managed large-scale EU and national projects in fields such as Mechatronics, Electric and Autonomous Vehicles, and Noise, Vibration, and Harshness (NVH). These experiences have equipped him with a comprehensive understanding of the technology landscape and the ability to align products with market needs. As a member of the European Automotive Research Partners Association (EARPA), Mathieu has previously represented Siemens as one of the key technical experts, engaging with major stakeholders and staying at the forefront of automotive innovation and research. Mathieu holds Master’s degrees in Mechanical Engineering from the University of Leuven and in Electromechanical & Electrical Engineering from the University of Gent, Campus Kortrijk. He is excited to continue his journey at Siemens, contributing to the mission of advancing technology and innovation, and collaborating with teams to develop solutions that will shape the future of the industry.
Presented By Minh Hoang Nguyen (Digital Blue)
Authored By Minh Hoang Nguyen (Digital Blue), Royan Dmello (University of Michigan), Anthony Waas (Digital Blue)
AbstractThe performance of composite materials with thermoset resins is influenced by the processing conditions because of the residual stresses that develop during cure. Thus, when analyzing composites, the effect of the cure process must be considered in conjunction with the mechanical load analysis. The sources of residual stresses are non-mechanical strains due to chemical shrinkage and thermal strains. Moreover, the non-uniform temperature distribution through the thickness also affects the stress development within the composite, especially when the composite is thick. In this work, we will present a finite element framework to perform cure analysis of fiber-reinforced composites and laminated structures. The cure-hardening instantaneous linear elasticity (CHILE) constitutive model is used to model the modulus evolution in the matrix during cure. Specifically, we will discuss a machine learning implementation using an artificial neural network (ANN) model, which serves as input to the CHILE model, to model the cure evolution of the matrix material. The ANN model is implemented inside our user subroutines. The present cure model takes into account the cure kinetics as well as the heat generation due to the chemical reaction. As an effective yet simple strategy, we use micromechanics models to inform the effective thermal properties and capture the overall effects of the cure process. In the micromechanics model, the fibers are modeled with linear-elastic transversely isotropic material behavior, whereas the matrix behavior is captured using the CHILE model. The FE framework is used to capture the curvature of asymmetric cross-ply panels with two different layups and sizes, and the results are compared with experimentally measured panel curvatures. The integrated analysis framework enables a high-fidelity progressive failure analysis of the composite laminate following a virtual curing step. Thus, the impact of the cure process on the mechanical performance of the laminate, for a given loading scenario, can be assessed. Ongoing work is devoted to expanding the capabilities of our modeling framework to include thermoplastic resin systems and other architectures such as textile composites.
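The two ingredients described above, an ANN surrogate for the cure-kinetics rate and a CHILE-style instantaneous modulus, can be sketched schematically as below. The network weights, resin properties, temperature profile, and the degree-of-cure interpolation rule are all placeholders for illustration, not the authors' calibrated model or their user-subroutine implementation.

```python
# Schematic sketch (weights, resin properties, and the interpolation rule are placeholders):
# an ANN surrogate for the cure rate d(alpha)/dt = f(T, alpha), and a CHILE-style
# instantaneous matrix modulus that hardens between its uncured and fully cured values.
import numpy as np

class CureRateANN:
    """Tiny MLP: inputs (temperature in K, degree of cure alpha) -> cure rate (1/s)."""
    def __init__(self, w1, b1, w2, b2):
        self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

    def __call__(self, T, alpha):
        x = np.array([T / 500.0, alpha])                 # crude input scaling
        h = np.tanh(self.w1 @ x + self.b1)
        out = self.w2 @ h + self.b2
        return max(float(out[0]), 0.0)                   # cure rate cannot be negative

def chile_modulus(alpha, E_uncured=5.0, E_cured=3500.0, a_gel=0.3, a_full=0.9):
    """Illustrative instantaneous matrix modulus (MPa) as a function of degree of cure."""
    if alpha <= a_gel:
        return E_uncured
    if alpha >= a_full:
        return E_cured
    f = (alpha - a_gel) / (a_full - a_gel)
    return E_uncured + f * (E_cured - E_uncured)

# Explicit time integration of cure at one material point for a given temperature history
rng = np.random.default_rng(0)
ann = CureRateANN(rng.normal(size=(8, 2)), rng.normal(size=8),
                  rng.normal(size=(1, 8)) * 1e-3, np.array([1e-4]))
alpha, dt = 0.0, 1.0
for step in range(7200):                      # 2-hour cure cycle, 1 s increments
    T = 300.0 + min(step, 1800) * 0.05        # simple ramp-and-hold temperature profile
    alpha = min(alpha + ann(T, alpha) * dt, 1.0)
print(f"final degree of cure: {alpha:.2f}, matrix modulus: {chile_modulus(alpha):.0f} MPa")
```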
Authored & Presented By Woojin Shin (FunctionBay)
AbstractThe automotive industry's shift from internal combustion engines to electric vehicles (EVs) constitutes a critical move toward addressing environmental concerns and enhancing energy sustainability. This transition necessitates significant advancements in thermal management systems for EV drivetrains, especially under high-speed operational conditions. The substantial heat generated by electric motors and their mechanical components, such as gearboxes, demands sophisticated cooling strategies to ensure both efficiency and reliability.This paper presents a comprehensive investigation into the thermal management of EV drivetrains, utilizing an integrated approach that combines Computational Fluid Dynamics (CFD) and Multibody Dynamics (MBD). This dual-methodology enables an in-depth analysis of heat flow and fluid dynamics, while simultaneously considering the mechanical interactions that impact the thermal characteristics of the drivetrain. CFD serves to provide detailed insights into thermal distribution and flow patterns, identifying critical bottlenecks in heat dissipation. In contrast, MBD captures the dynamic mechanical behaviors, offering a holistic understanding of the system’s overall thermal performance.Our research identifies key areas where cooling system designs can be optimized, focusing on improving heat dissipation efficiency to either maintain or enhance the drivetrain performance. By thoroughly evaluating both the electric motor and related mechanical systems, this study contributes substantially to the development of innovative and efficient thermal management solutions that are crucial for advancing EV technologies.The findings indicate considerable potential for improvements in cooling performance, emphasizing strategic approaches for optimizing thermal management while ensuring the drivetrain remains efficient. These developments are vital for supporting the efficient transition to electric mobility, providing sustainable, high-performance vehicle solutions that align with global environmental goals.In conclusion, the integration of CFD and MBD analyses in evaluating EV drivetrain cooling performance represents a significant advancement in sustainable automotive technology development. This research supports environmentally conscious transportation initiatives and addresses current technical challenges while laying a robust groundwork for future innovations in the EV industry. By doing so, it aligns with global efforts to promote sustainable and efficient transportation solutions.
Presented By Spyros Tzamtzis (BETA CAE Systems SA)
Authored By Spyros Tzamtzis (BETA CAE Systems SA), Prabhat Kumar Singh (BETA CAE Systems India Pvt. Ltd.), Irene Makropoulou (BETA CAE Systems SA), Ioannis Charalampidis (BETA CAE Systems SA)
AbstractIn order to keep up with a rapidly evolving engineering landscape, CAE teams seek to improve the efficiency of their workflows and reduce the time to market. The integration of CAD and CAE is a longstanding challenge and a critical aspect of model build. At its core lies the need for a robust and effective bridge between design (PDM/PLM systems) and simulation (SDM systems/CAE tools) that will enable the definition of streamlined workflows for the extraction of CAD data and the seamless creation of CAE structures.More often than not, PDM systems and the CAE tools used for creation of the digital models are disconnected and CAE engineers need to possess substantial expertise in both domains in order to get access to the right data promptly and hassle-free. This, inevitably, limits the extent to which the extraction of the required CAD data can be standardized and automated.At the same time, the digital mock-up (DMU) is organized in sub-assemblies considering several factors, that balance both engineering and manufacturing considerations. However, in the majority of cases, the organization of the DMU is not well suited to CAE models and therefore, the classification of CAD parts into CAE modules plays a pivotal role in the CAE model build turnaround time. The differences in modularization between CAD models and CAE sub-assemblies necessitate a mapping process between the DMU structure and the CAE modules. This work presents an overview of a seamless framework for streamlined transition from CAD to CAE structures for optimized product development. It features a direct interface to the PDM for DMU downloads, offering an efficient gateway to CAD structures directly from within the SDM system. Template-driven methods are utilized for the automatic creation of CAE structures, which can be previewed in the embedded viewer and corrected if needed. In addition, CAE structures can be created manually by visually identifying sub-assemblies in the viewer. AI-driven CAE structure creation is also supported, leveraging previously created CAE structures. The framework ensures that only new CAD files related to the structure are downloaded from the PDM system, and it facilitates their translation to pre-processing-ready files either locally or on a remote server. Finally, it can directly check for part-level updates of CAE modules in the PDM system, enabling a quick estimation of upcoming workload for the next update, or even a shortcut for rapid model updates.
Presented By Romain Klein (Rescale)
Authored By Romain Klein (Rescale)Carlos Mecha (Rescale)
AbstractHigh-performance computing (HPC) continues to play a pivotal role in driving innovation across industries, from design and simulation to AI-enabled engineering. As workloads increasingly shift between on-premises and cloud environments, orchestrating hybrid HPC systems has become essential to ensure performance, cost-efficiency, and scalability. This presentation will delve into strategies for addressing data gravity challenges and implementing robust data management solutions in hybrid HPC architectures.
A central focus will be on the evolving role of data strategies in HPC environments. The discussion will cover the importance of tiered storage strategies, highlighting how balancing fast-access storage tiers with cost-effective archival solutions can optimise operational costs without sacrificing performance. Emphasis will be placed on the critical role of Simulation Process and Data Management (SPDM) and Simulation Process and Data Resource Management (SPDRM) systems in ensuring data provenance, reproducibility, and accessibility for downstream use cases.
One of the key innovations to be discussed is the Data Lake Exporter (DLE) solution. This approach automates the transfer of HPC workloads executed in the cloud to archival cold storage, enabling organisations to minimise costs while preserving simulation results for future use. Furthermore, this process begins the transformation of raw simulation data into AI-ready datasets, opening pathways to leverage machine learning and AI for predictive modelling, optimisation, and insight generation.
The presentation will also explore best practices for metadata capture to ensure simulation data is not only stored but also structured for effective retrieval and AI utilisation. Real-world use cases and insights from deploying hybrid HPC solutions across diverse industries will provide context for the strategies shared.
Attendees will leave with a deeper understanding of how to design and manage hybrid HPC environments that seamlessly integrate compute, data management, and AI workflows while addressing sustainability goals and operational efficiency. This session will be particularly valuable for engineering leaders, data scientists, and HPC practitioners aiming to harness the full potential of hybrid architectures in their simulation and AI endeavours.
15:40
Authored & Presented By Frank Günther (Knorr-Bremse)
AbstractTraditional Computer Aided Engineering emphasizes the use of simulation as a preparatory activity before verifying and validating a product through hardware testing. The main benefit of simulation is to speed up product development in a “first time right” paradigm where a hardware-driven product V&V phase is expected to confirm what is already known through computer simulation.
In many industries, for example Railway and Automotive, this hardware-driven product V&V phase constitutes a major share of the overall product development effort, as safety requirements are very high and other phases of product development have been streamlined and optimized using simulation.
In other industries, for example Aerospace and Nuclear, cost-prohibitive and, in some cases, impractical hardware testing has already led to a large share of well-established virtual product V&V procedures.
More and more, the Automotive and Railway industries desire to establish strategies for Virtual Product V&V as well. The task is to define virtual V&V processes that provide at least the same level of assurance and certainty as the established hardware-driven processes.
For this, it is necessary to quantify the uncertainty of simulation results and compare it to the acceptable, but usually unknown, uncertainty of established hardware-based V&V procedures. Perhaps quantifying the certainty or assurance of a V&V procedure would be more to the point, but we use the established term “Uncertainty Quantification (UQ)”.
We will present several application examples that adhere to the following pattern:
1) Quantify the uncertainty of an established, hardware-based V&V process
2) Validate the simulation model
3) Quantify the uncertainty of the validated simulation model
4) Propose a virtual product V&V process with equivalent uncertainty
It is important to note that, due to the need to validate the simulation models, hardware testing still plays an important role in product V&V. However, the use of simulation enables more efficient and flexible use of hardware testing, resulting in faster, more efficient product V&V.
While we acknowledge that there is still a long way to go before simulation is fully leveraged in product V&V, we hope to provide some useful ideas and guidance to those who wish to establish a strategy for Virtual Product V&V in their field.
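To make the four-step pattern above concrete, the following minimal Python sketch (all values are invented; this is not the authors' data or code) compares the scatter of a hardware-based V&V metric with the scatter of a validated simulation model under input uncertainty, and accepts the virtual process only if its uncertainty is no larger.

import numpy as np

rng = np.random.default_rng(42)

# 1) Uncertainty of the established hardware-based process:
#    repeated tests of the same quantity (e.g., a braking distance in m).
hardware_tests = np.array([51.8, 52.6, 50.9, 53.1, 52.2, 51.5])
hw_mean, hw_std = hardware_tests.mean(), hardware_tests.std(ddof=1)

# 2)+3) Validated simulation model: propagate input scatter (friction, mass, ...)
#       through a cheap stand-in for the validated model to quantify its uncertainty.
def simulation_surrogate(friction, mass):
    # placeholder response surface, not a real brake model
    return 48.0 + 30.0 * (0.35 - friction) + 0.002 * (mass - 20000.0)

friction = rng.normal(0.35, 0.02, 10_000)
mass = rng.normal(20_000.0, 400.0, 10_000)
sim_samples = simulation_surrogate(friction, mass)
sim_mean, sim_std = sim_samples.mean(), sim_samples.std(ddof=1)

# 4) Propose a virtual V&V process only if its uncertainty is no worse than the
#    hardware-based one (compared here via standard deviations).
print(f"hardware: {hw_mean:.1f} +/- {hw_std:.2f}   simulation: {sim_mean:.1f} +/- {sim_std:.2f}")
print("virtual V&V uncertainty acceptable:", sim_std <= hw_std)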
BiographyResponsible for CAE at Knorr-Bremse Rail in Munich, Germany. Probabilistic Simulation and Open Source advocate. - 1995 Graduated with diploma in Mechanical Engineering from University of Stuttgart, Germany - 1996-1998 Doctorate in Mechanical Engineering with focus on numerical methods, Northwestern University, Evanston, Illinois, USA - 1998-2004 Senior crash simulation expert and project leader at Daimler / Mercedes in Stuttgart, Germany - Since 2004: Responsible for CAE and numerical simulation at Knorr-Bremse in Munich, Germany. Currently: Director of Virtual Testing and Simulations
Presented By Paul Mc Grath (Neural Concept Ltd.)
Authored By Paul Mc Grath (Neural Concept Ltd.)Michael Straub (AVL Deutschland GmbH)
AbstractIn today’s highly competitive landscape, engineering organizations face increasing pressure to deliver complex, high-performing products at an accelerated pace, all while managing tighter margins. To maintain an edge, teams must shorten design cycles and enhance product performance and quality simultaneously. However, traditional simulation and optimization tools often fall short of meeting the demands of today’s fast-moving production environments. Design teams struggle to fully utilize the insights from these tools and seamlessly integrate them into their workflows. Engineering Intelligence (EI) represents a significant advancement, leveraging modern AI techniques to overcome these challenges. EI enables a paradigm shift by offering multi-physics and multi-component generative design, more flexible automation, and the ability to harness historical data to expedite simulations. Adopting EI offers a transformative approach, integrating simulation, CAD, and engineering data into an intelligent, cohesive platform that accelerates the product development lifecycle from concept to market. By drawing on enterprise engineering knowledge and existing processes, EI facilitates near-autonomous design, significantly reducing development time. At the heart of this transformation is a new breed of engineers—the CAE Data Scientists—who blend simulation expertise with data science skills. To address the engineering challenge of accelerating multi-hour modal analysis, Neural Concept in collaboration with AVL developed a 3D deep learning surrogate model to predict results within seconds. The model takes 3D bracket geometries (used in engine compartments) as input and predicts the high-fidelity 3D stress field (in MPa) and the first four frequency modes. Using 62 training samples and 11 testing samples, a proprietary 3D deep learning architecture based on geodesic convolutions was trained to replace simulations for new geometries. Key results demonstrate exceptional accuracy, with frequency mode errors below 0.2 Hz (well within the 1 Hz workflow threshold) and stress errors averaging below 2 MPa, with extreme value errors under 3 MPa. An ablation study revealed that as few as 10 training samples would suffice for this level of precision. By leveraging data science tools in engineering and upskilling engineers as CAE data scientists, significant gains in efficiency and design capabilities can be achieved. A key limitation of this approach is the model’s reliance on the boundaries of its training design space, which constrains its applicability to broader scenarios. Overcoming this challenge requires not only a deeper understanding of the data and problem space but also the deployment of advanced engineering intelligence tools such as generative design and automated re-simulation. AVL is planning to implement these tools at scale, to expand the model’s robustness and adaptability, enabling it to address a wider range of use cases effectively. Moreover, this case study highlights the transformative role of workforce upskilling in engineering. The integration of AI into CAE workflows requires a new breed of professionals: CAE data scientists capable of bridging engineering expertise with advanced data science techniques. By fostering such skills, we can unlock the full potential of data science for a wider variety of engineering challenges. The project highlighted efficient onboarding, with a young engineer at AVL rapidly building their own engineering intelligence workflows.
Presented By Konstantinos Anagnostopoulos (BETA CAE Systems India Pvt. Ltd.)
Authored By Konstantinos Anagnostopoulos (BETA CAE Systems India Pvt. Ltd.)Irene Makropoulou (BETA CAE Systems) Dimitrios Daniil (BETA CAE Systems)
AbstractIn today's fast-paced field of engineering analysis and simulations, the use of reduced models has proven to be crucial across various disciplines, particularly in aerospace and automotive for NVH and durability analyses. In numerical simulations, reduced models are computationally efficient mathematical representations derived from physical or analytical models using Model Order Reduction (MOR) techniques. Their primary purpose is to decrease the computational time and memory required to simulate complex analytical system models. Common types of reduced models in Finite Element Analysis (FEA) include modal models (from modal reduction), Nastran superelements, and Frequency Response Function (FRF) models, which can be derived from either analytical or measured data, among others. In practice, reduced models are implemented in a modular way, allowing analysts to focus on components of interest. These components are represented in detail during simulations, while the influence of surrounding components is accounted for using their computationally lighter reduced representations. This modular approach enables efficient structural and dynamic analyses by significantly reducing computational demands without compromising accuracy.
Inherent challenges emerge with the development of reduced models. The definition of reduced models requires expert knowledge and is often the result of a trial-and-error approach to achieve a balance between simplification and accuracy in encapsulating the complete model's main characteristics. Then, the creation of alternative representations of reduced models, as well as the creation of new reduced model versions for every modification of the detailed FE-representation of the model, unveils another inherent weakness of the standard practices: the lack of traceability, which completely hinders collaboration between simulation engineers.
The implementation of a Simulation Process and Data Management (SPDM) system represents a transformative approach to managing simulation workflows and data. On the data management side, an SPDM system streamlines the creation and handling of reduced models within a collaborative environment. By providing structured “recipes” for generating various reduced model types and maintaining traceability of source models and dependencies, it ensures end-to-end transparency. This enables seamless tracking from the digital mock-up to full FE-representations of sub-assemblies, and further, to reduced models and simulations. Such traceability simplifies the definition and analysis of simulation variations, allowing effortless transitions between full FE and reduced representations in the pre-processor. Moreover, the system significantly enhances "what-if" analyses and optimization processes by properly documenting simulation iterations, enabling clear and comprehensible comparisons between variations. Furthermore, during the early phases of product development, when detailed designs are not yet available, the system offers access to the complete library of reduced models from past models, which can be used for some early CAE verifications.
On the process management front, SPDM systems facilitate the standardization and automation of CAE workflows. These capabilities optimize the generation of reduced models by automating the submission of reduction runs to the solvers on the HPC systems, followed by systematic storage of results linked to their respective reduced models.
This work highlights the advantages of integrating reduced model methodologies within an SPDM framework. Real-world case studies illustrate how SPDM systems address critical challenges in simulation model preparation and analysis, including data sharing, integrity, traceability, and version control. Ultimately, the findings demonstrate that adopting SPDM systems not only accelerates the end-to-end simulation process but also fosters innovation and productivity across engineering teams.
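As a concrete, simplified illustration of what a reduced model is, the short NumPy sketch below performs a classic Guyan static condensation on a small spring chain. It is shown only to make the MOR idea tangible; it is not the specific reduction recipe managed in the SPDM workflow described above.

import numpy as np

def guyan_reduce(K, master_dofs):
    """Condense stiffness matrix K onto the retained (master) DOFs."""
    n = K.shape[0]
    m = np.asarray(master_dofs)
    s = np.setdiff1d(np.arange(n), m)          # slave DOFs to eliminate
    Kmm = K[np.ix_(m, m)]
    Kms = K[np.ix_(m, s)]
    Ksm = K[np.ix_(s, m)]
    Kss = K[np.ix_(s, s)]
    # K_red = Kmm - Kms * Kss^-1 * Ksm
    return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

# 4-DOF spring chain; keep only the two interface DOFs 0 and 3
k = 1e6
K = k * np.array([[ 2, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  2]], dtype=float)
K_red = guyan_reduce(K, master_dofs=[0, 3])
print(K_red)   # 2x2 matrix reproducing the static stiffness seen at the interface DOFs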
BiographyHe is an Application Engineer specializing in Simulation Process and Data Management and holds a Master of Science in Artificial Intelligence. With a robust academic background and practical experience, he is committed to contributing to the development and optimization of simulation applications by improving processes, developing tools and conducting research and analysis to drive innovative solutions.
Authored & Presented By Hardy Krappe (PDTec AG)
AbstractIn today's product development landscape, a technology or functional data management system is crucial for the entire product lifecycle. Technology data or functional data, which emerges early as requirements and target data, is continuously refined throughout various development phases. In particular, simulation plays a vital role by utilizing these data from measurement series, HiL tests, and field trials to generate numerous load cases and create new technology data. Additionally, technology data is found in systems engineering as parameter sets that assist in configurations and functional models. With upcoming regulations like the Digital Product Passport, the collection and management of these data for achieving sustainability goals will become increasingly important. Technology data is utilized not only in systems engineering but also in many application cases across various disciplines, and it increasingly appears as a data type in its own right. This raises the requirement for a technology data-oriented data management system that manages the data not only for product development but also for later lifecycle phases, providing it in a context- and requirement-appropriate manner throughout the entire circular economy. A technology data management system encompasses several key components: seamless, highly integrated and easy data collection and integration from diverse sources; standardization of data formats for compatibility and a normalized standard data storage procedure; advanced analysis and visualization tools for extracting and visualizing technology data items such as curves or matrices; as well as robust lifecycle tracking capabilities that monitor each technology data item from request to delivery through to end-of-life disposal. Establishing feedback loops between the different phases of the product lifecycle ensures continuous improvement, while effective regulatory compliance mechanisms help organizations adhere to stringent environmental standards.
The presentation will highlight the benefits of an effective technology data management system and demonstrate how companies can optimize their processes, for example, within the framework of a processing procedure.
Authored By Alain Tramecon (ESI Group)Sumeet Singh (EM Works)
AbstractMeeting the sound quality requirements for an electric vehicle poses different challenges compared to vehicles with internal combustion engines. In an EV, the absence of engine noise makes it difficult to give the vehicle a specific character. In addition, the reduced background noise at low speed may not mask the noise from accessories, making them more noticeable. More importantly, the noise signature of an EV powertrain is completely different, as the electric motor, the power control and the gearbox generate tonal noise, and the tones are at high frequency and can therefore be perceived as more annoying.
This paper investigates the electromagnetic performance and NVH (Noise, Vibration, and Harshness) characteristics of a 48-slot/8-pole interior permanent magnet (IPM) motor specifically designed for automotive applications. Utilizing advanced virtual modelling and simulation techniques, the study conducts a comparative evaluation of rotor configurations with even and uneven magnetization under varying eccentricities – static, dynamic, and mixed. This methodology provides a comprehensive understanding of how magnetization variations affect motor behavior and force distribution within the motor.
The forces are then used to load a structural finite-element model of the housing of the electric drivetrain unit supported on grounded mounts to predict the noise and vibration performance. The radial and tangential electromagnetic forces are applied on each stator tooth and the surface velocities of the housing are predicted to calculate the radiated noise at the fundamental and harmonic frequencies of the rotor angular velocity. The simulation process makes it possible to optimize the design of the electric motor to balance the performance requirements and the noise radiation before building any prototype. In addition, the comparison of the radiated sound power for all the configurations simulated demonstrates the effect that uneven magnetization as well as rotor eccentricities can have on the noise and vibration performance of the electric drivetrain unit.
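The coupling step described above, from the electromagnetic solution to structural loads, typically relies on the Maxwell stress tensor. The short sketch below (with a purely synthetic flux-density waveform, not the paper's 48-slot/8-pole results) shows the standard conversion of air-gap flux densities into radial and tangential force densities and their decomposition into spatial harmonic orders.

import numpy as np

mu0 = 4e-7 * np.pi
theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)   # air-gap angle [rad]

# synthetic flux densities for an 8-pole machine with a slotting ripple
Br = 0.9 * np.sin(4 * theta) + 0.05 * np.sin(48 * theta)
Bt = 0.1 * np.sin(4 * theta + 0.3)

sigma_r = (Br**2 - Bt**2) / (2.0 * mu0)      # radial force density [N/m^2]
sigma_t = (Br * Bt) / mu0                    # tangential force density [N/m^2]

# spatial harmonic content -> which force orders will excite the stator
spectrum = np.abs(np.fft.rfft(sigma_r)) / len(theta)
dominant_orders = np.argsort(spectrum)[::-1][:5]
print("dominant spatial force orders:", sorted(int(o) for o in dominant_orders))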
Presented By Wolfgang Korte (PART Engineering GmbH)
Authored By Wolfgang Korte (PART Engineering GmbH)Marcus Stojek (PART Engineering) Johannes Kaldenhoff (PART Engineering) Timo Grunemann (SKZ - KFE GmbH) Ruben Schlutter (SKZ - KFE GmbH)
AbstractPlastic components for technical applications are often reinforced with short fibers. For the simulation of such components, the direction-dependent mechanical material properties must be taken into account. This is done by using anisotropic material models. The required model parameters are usually determined using an iterative re-engineering approach by comparing experimentally determined and simulation-predicted test data. For this purpose, the stress-strain curves determined from a tensile test using a specimen with a known fiber orientation are compared with the simulated results, and the model parameters of the anisotropic material model are adjusted iteratively until sufficient agreement is achieved. This calibration process is comparatively complex. It requires the execution of an injection molding simulation for the specimen to determine the local fiber orientation, which is manufacturing-dependent. Furthermore, a structural simulation of the tensile test for this test specimen is required. In industrial practice, this calibration process is often limited for various reasons: the responsible employee only has access to a structural simulation program but not to an injection molding simulation program, lacks the expertise or experience to determine the model parameters in a physically meaningful way, is under time pressure, etc.
This article presents a method based on machine learning (ML) that makes it possible to carry out the outlined calibration process largely automatically without having to use conventional FEM and injection molding simulation programs. For this purpose, artificial neural networks and decision-tree-based ML models are trained for the regression of target variables. In order to generate training data, injection molding and structural simulations are carried out for characteristic plastic material groups and typical specimen geometries. The rheological and mechanical material properties and geometric ratios are varied. The trained ML models can completely replace the numerical simulations. This means that the entire calibration process can be carried out autonomously by the responsible employee.
A retained test data set that was not used for training was used to validate the performance of the ML models. Initial results currently available show very good agreement between experimental and predicted stress-strain curves.
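The surrogate idea can be sketched as below: a tree-based regressor is trained to map a few material and process descriptors to points on the stress-strain curve, so the calibration loop no longer calls the FEM or injection molding solver. The inputs, the placeholder "constitutive" trend and all numbers are invented for illustration and do not reflect the authors' trained models.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_curve_points = 500, 20

# inputs: [fibre volume fraction, orientation tensor component a11, matrix modulus in MPa]
X = np.column_stack([
    rng.uniform(0.10, 0.35, n_samples),
    rng.uniform(0.50, 0.90, n_samples),
    rng.uniform(1500.0, 3500.0, n_samples),
])

# outputs: stress values at fixed strain levels (placeholder constitutive trend)
strain = np.linspace(0.001, 0.03, n_curve_points)
stiffness = X[:, 2] * (1.0 + 4.0 * X[:, 0] * X[:, 1])
Y = stiffness[:, None] * strain[None, :] * np.exp(-8.0 * strain[None, :])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, Y_tr)
print("R^2 on held-out curves:", round(model.score(X_te, Y_te), 3))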
BiographyWolfgang Korte is Managing Director at PART Engineering - a German-based CAE software company. In addition to his duties as managing director, he is responsible for customer communication and strategic market monitoring. He is regularly involved in R&D projects, publishes technical articles and has been active in the simulation world for more than two decades. He studied mechanical engineering at Aachen Technical University, Germany, where he also received his Ph.D.
Presented By Florian Dirisamer (dAIve GmbH)
Authored By Florian Dirisamer (dAIve GmbH)Markus Thurmeier (Audi AG)
AbstractFast charging of electric vehicles (EVs) presents a critical challenge for the evolution of sustainable electromobility. High charging currents introduce thermal management issues, which can lead to efficiency losses, battery degradation, and increased safety risks. Traditional methods to address these issues rely on highly detailed simulations to analyze and optimize thermal behavior during charging. However, these simulations are computationally expensive and time-consuming, creating a bottleneck in efforts to improve charging efficiency. To address these challenges, integrating artificial intelligence (AI) into model reduction techniques has emerged as a promising solution, enabling a faster and more efficient approach to thermal analysis.
Model reduction simplifies complex models into streamlined versions that retain essential information, facilitating faster computation without compromising accuracy. Historically, model reduction has been accomplished using mathematical and engineering methods such as proper orthogonal decomposition (POD) and balanced truncation, which reduce model complexity by identifying and retaining the most significant components. However, these traditional approaches require extensive expertise, are computationally costly, and are often limited when applied to highly nonlinear systems, such as those encountered in EV thermal management.
The integration of AI, particularly through machine learning (ML), brings an innovative edge to model reduction, providing an adaptable and data-driven approach. Unlike conventional methods that rely heavily on predefined mathematical frameworks, AI-driven model reduction learns patterns from data, making it especially useful for managing nonlinearities. By training on simulation outputs, ML algorithms identify underlying relationships in the data, capturing complex interactions in the charging process. Neural networks, for instance, excel at autonomously extracting relevant features, eliminating the need for labor-intensive manual feature selection and making the reduction process more efficient.
One of the key advantages of AI-based model reduction lies in its ability to optimize complex systems dynamically. For instance, when applied to fast-charging EVs, AI-driven models can predict thermal behavior in real-time and adapt the charging process to manage temperature rise effectively. This reduces the need for extensive simulation recalculations while preserving accuracy. AI algorithms can rapidly generate reduced models that streamline computation, enabling real-time control and decision-making during the charging process, which ultimately improves charging efficiency and battery life.
Another notable benefit of AI-driven model reduction is its ability to incorporate innovative thermal management solutions, such as the use of phase change materials (PCMs). PCMs offer effective thermal regulation by absorbing and releasing heat during phase transitions, making them ideal for managing the high temperatures generated during fast charging. By integrating PCMs into the charging cable and optimizing their thermal properties through AI-reduced models, significant thermal losses can be mitigated, resulting in improved temperature control. This reduces the need for external cooling mechanisms, minimizes energy loss, and enhances charging speed without compromising safety or efficiency.
This presentation will delve into the practical applications and benefits of AI-assisted model reduction for optimizing EV charging.
By reducing model complexity while maintaining predictive accuracy, AI-driven approaches bridge the gap between theoretical simulation and real-world implementation. The use of AI in model reduction transforms traditional simulation workflows, offering a scalable, data-driven solution that enhances the sustainability and efficiency of EV fast-charging technology. This innovative synergy of AI and model reduction presents a valuable pathway toward achieving a more sustainable, efficient, and scalable future for electromobility.
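As a minimal illustration of the data-driven surrogate idea (not the Audi/dAIve model, and with entirely synthetic physics), the sketch below trains a small neural network on outputs of a placeholder "detailed" thermal simulation so that the peak temperature during a charging session can be queried in real time.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

def detailed_thermal_sim(current_a, ambient_c):
    """Placeholder for an expensive simulation: peak cable temperature in deg C."""
    return ambient_c + 0.0016 * current_a**2 + 0.02 * current_a

I = rng.uniform(100.0, 500.0, 400)           # charging current [A]
T_amb = rng.uniform(-10.0, 40.0, 400)        # ambient temperature [degC]
X = np.column_stack([I, T_amb])
y = detailed_thermal_sim(I, T_amb) + rng.normal(0.0, 0.5, 400)   # "simulation" outputs

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32),
                                       max_iter=5000, random_state=1)).fit(X, y)

# real-time query during a charging session
print("predicted peak temperature [degC]:",
      surrogate.predict([[350.0, 25.0]]).round(1))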
16:00
Authored & Presented By Kambiz Kayvantash (Hexagon)
AbstractTraditional engineering computations require data representing the loading, material properties and geometry. In general, these are provided in the form of numbers (scalars, matrices, CAD). Recent advances in machine learning and associated technologies have allowed us to explore the feasibility and limits of various other types of data, which were not fully exploited up to now. In particular, we are now capable of acquiring, thanks to huge advances in sensing technology, various other types of data such as geometrical features and forms, categorical data (colors, Booleans, groups, etc.), images, CT scans, radar or lidar signals and sound. In this paper we intend to present some important applications and underlying methods that allow everyday engineering modelling tasks to be conducted in a much more powerful manner, using various machine learning techniques for increased performance and reduced cost of model preparation.
For the characterization of images many solutions are at hand, ranging from basic PCA (SVD) type solutions to CNNs (Convolutional Neural Networks) and, more recently, U-Net approaches, which can be assimilated to deep learning. Transfer learning has also allowed us to explore existing databases as complementary support for learning. Numerous variants of the above exist too. The only problem with these solutions lies in the fact that they are highly customized and require further adjustments per application. In this paper we will present a global framework allowing for the generalization of many apparently different applications under a unified approach. In particular we shall demonstrate how various forms of data, which we refer to as information, can be handled in a systematic way, applying nearly identical feature extraction and prediction methods.
For the sake of demonstration, we shall present five different cases:
1) A structured data matrix with prediction of a time-dependent outcome
2) A CAD model used for cost estimation within a CNC application
3) An image-based fault detection solution
4) An image-based stress field prediction
5) A fault detection solution based on sound recordings
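The unified "feature extraction + prediction" pattern can be illustrated with its simplest variant, PCA features followed by a linear regressor. The images and the target quantity below are synthetic placeholders rather than any of the five application cases listed above.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_images, h, w = 300, 32, 32

# synthetic "images": a blob whose position and size drive the target quantity
yy, xx = np.mgrid[0:h, 0:w]
centers = rng.uniform(8, 24, (n_images, 2))
radii = rng.uniform(3.0, 8.0, n_images)
images = np.array([np.exp(-(((xx - cx)**2 + (yy - cy)**2) / r**2))
                   for (cx, cy), r in zip(centers, radii)])
target = radii**2 * np.pi                      # e.g., a defect-size-like quantity

X = images.reshape(n_images, -1)               # flatten pixels into a feature vector
model = make_pipeline(PCA(n_components=10), Ridge(alpha=1.0))
scores = cross_val_score(model, X, target, cv=5, scoring="r2")
print("cross-validated R^2:", scores.round(3))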
Presented By Danilo Di Stefano (Esteco)
Authored By Danilo Di Stefano (Esteco)Alessandro Viola (ESTECO SpA) Simone Genovese (ESTECO SpA)
AbstractThe energy required to run AI tasks is already growing rapidly; the World Economic Forum estimated an annual growth rate between 26% and 36%. This energy consumption poses significant sustainability challenges, as the environmental impact of intensive computational processes continues to grow. It also inevitably impacts engineering design optimization methodologies aimed at achieving optimal design performance within constraints, which often necessitate numerous simulations or experiments that can be computationally intensive and time-consuming, especially for complex systems. AI/ML plays a key role in providing computationally efficient surrogate models that allow for more rapid exploration of design spaces. Therefore, even in the context of engineering design optimization, improving the efficiency of AI/ML models is not just a technical necessity but also an urgent environmental imperative. We propose an approach based on an integrated simulation workflow with Design of Experiments (DOE), Response Surface Models (RSM), and Reduced Order Models (ROM). A key component is the reuse of existing data. By repurposing relevant datasets from previous projects, we can significantly reduce the need for new simulations or experiments, thereby conserving computational resources. This not only enhances sustainability but also accelerates the training and validation process of AI/ML models. Even when data is not available, smart incremental and adaptive DOE strategies help to generate a well-distributed set of designs within the input domain space, capturing the underlying behavior of the system accurately with a minimal number of simulations or experiments. Next, RSM models are built to explore different model types and parameters. This exploration helps in finding the most suitable approximation for specific problems, and our integrated approach fosters faster model parameter tuning and cross-validation of candidate models. For more complex systems, we implement ROM techniques based on Proper Orthogonal Decomposition (POD). POD-based ROMs offer significant advantages in dimensionality reduction, crucial for handling high-dimensional data, where the curse of dimensionality poses several challenges. By projecting the full-order model onto a lower-dimensional subspace, POD-ROMs drastically reduce computational requirements while maintaining accuracy. Furthermore, POD-based ROMs enhance the interpretability of AI/ML models. By identifying the most significant modes of the system, POD provides insights into the underlying physics, making the AI/ML predictions more transparent and easier to validate. This interpretability is crucial in engineering applications where understanding the model's decision-making process is as important as its accuracy. The integration of DOE, RSM, and ROM into an automated simulation workflow accelerates various tasks, including design exploration, sensitivity analysis, and optimization calculations.
We demonstrate the efficacy of this approach in optimizing business jet performance, by exploiting a multidisciplinary approach that encompasses both fluid dynamics and electromagnetics, along with their interactions. This study was made possible only by using AI/ML methods. The proposed methodology successfully addressed the efficiency challenges in AI/ML models through several mechanisms:
1. Data reuse and smart DOE reduce the need for extensive new data generation.
2. Workflow automation enhances RSM model parameter tuning and validation.
3. POD-based ROMs offer dimensionality reduction, crucial for handling high-dimensional data, thereby reducing memory requirements and computational complexity.
4. The enhanced interpretability of POD-ROMs allows for more efficient model refinement and validation, reducing the need for extensive trial-and-error processes in AI/ML model development.
In conclusion, this paper presents a comprehensive approach to enhancing AI/ML efficiency in multidisciplinary engineering design optimization. By leveraging DOE, RSM, ROM, and data reuse strategies, we demonstrate significant improvements in computational efficiency, addressing both the technical and environmental challenges posed by the increasing energy demands of AI tasks.
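A compact sketch of the DOE, snapshots, POD and response-surface building block described above; a synthetic 1D "field" and a single design parameter stand in for the actual multidisciplinary simulations.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

x = np.linspace(0.0, 1.0, 200)                       # spatial coordinate
doe = np.linspace(0.2, 2.0, 25)[:, None]             # DOE over one design parameter

def full_order_field(p):                              # stand-in for a full simulation
    return np.sin(np.pi * p * x) * np.exp(-p * x)

snapshots = np.column_stack([full_order_field(p[0]) for p in doe])   # (200, 25)

# POD via thin SVD; keep the modes capturing ~99.9% of the energy
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999)) + 1
modes = U[:, :r]                                      # (200, r) POD basis
coeffs = modes.T @ snapshots                          # (r, 25) modal coefficients

# response surface: design parameter -> modal coefficients
rsm = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(doe, coeffs.T)

# fast ROM prediction at a new design point, reconstructed in full space
p_new = np.array([[1.37]])
field_rom = modes @ rsm.predict(p_new).ravel()
err = np.linalg.norm(field_rom - full_order_field(1.37)) / np.linalg.norm(full_order_field(1.37))
print(f"kept modes: {r}, relative ROM error: {err:.2e}")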
Presented By Sebastian Ciceo (Siemens Digital Industry Software)
Authored By Sebastian Ciceo (Siemens Digital Industry Software)Raluca Raia (Siemens Digital Industry Software)
AbstractRotor notching is an established machine-design procedure for mitigating electrical machine noise and vibrations. The air-gap forces are shaped by introducing rotor notches to reduce specific temporal harmonics that excite the machine structure and create mechanical vibration orders. This method has the same effects as the skewing technique. Additionally, the rotor notches can reduce permanent-magnet synchronous machine (PMSM) torque ripple by acting on the tangential air-gap force harmonics responsible for the torque fluctuations. This paper focuses on the radial-flux PMSM due to its overwhelming market share in vehicle traction and auxiliary-system applications.
The traditional simulation methodology for rotor notching involves a 2-dimensional (2D) finite element (FE) electromagnetic analysis of the machine cross-section to extract the air-gap forces for a specific notch configuration for a few operation conditions, usually the maximum torque operation across the whole speed range.
Afterward, the vibroacoustic response is computed. Two main methods are usually employed in the industry. In the first method, an analytical vibroacoustic model (usually of the stator only) is used to compute the response. This represents a trade-off between speed and accuracy, with a penalty applied to the accuracy metric because complex housing, local flexibility, and anisotropic material properties cannot be considered. The second method involves a mesh-mapping operation from the 2D (or 2.5D if skewing is present) electromagnetic mesh to a 3D FE vibroacoustic model of the machine assembly and computation of 3D frequency-domain vibroacoustic responses. This method is accurate (if attention is paid to the conservation of energy and force spectra during the mesh-mapping procedure) but is more time- and data-intensive.
In this paper, the air-gap forces coming from Simcenter e-Machine Design are computed and tabulated under the Maximum Torque per Ampere (MTPA) operation conditions. The electromagnetic-vibroacoustic coupling method employs the fast and accurate 3D vibroacoustic reduced-order modeling (ROM) technique in Simcenter 3D using the vibration-synthesis approach. In this way, the vibroacoustic response across the full machine torque and speed range can be determined quickly. This method retains the computational speed advantages of the analytical solution, along with the accuracy of the mesh-mapping approach. Finally, a design of experiments (DOE) using Simcenter HEEDS is conducted for different notch configurations. Different noise maps (vibroacoustic mechanical order responses for the whole operation range) are used to identify key regions of improvement and degradation in terms of NVH using specific KPIs, and the results are discussed.
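The vibration-synthesis idea can be caricatured as follows: unit-force structural FRFs are computed once and then scaled by tabulated force-harmonic amplitudes per operating point to fill order-response ("noise") maps over the whole speed range. Everything below (FRF, force table, orders) is a synthetic placeholder, not the Simcenter workflow itself.

import numpy as np

speeds_rpm = np.linspace(1000, 12000, 45)          # operating points
orders = np.array([8, 16, 48])                     # mechanical orders of interest

def unit_force_frf(freq_hz):
    """Placeholder single-resonance structural FRF magnitude per unit force."""
    fn, zeta = 1800.0, 0.02                        # resonance and damping (invented)
    r = freq_hz / fn
    return 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

# tabulated force-harmonic amplitudes per (speed, order), e.g. from the EM solver
force_amp = 50.0 + 20.0 * np.cos(np.outer(speeds_rpm / 4000.0, orders / 8.0))

# synthesis: order frequency = order * rotational frequency
freq = np.outer(speeds_rpm / 60.0, orders)         # [Hz], shape (n_speeds, n_orders)
response_map = force_amp * unit_force_frf(freq)    # "noise map" per order vs speed

worst = np.unravel_index(np.argmax(response_map), response_map.shape)
print(f"worst order {orders[worst[1]]} at {speeds_rpm[worst[0]]:.0f} rpm")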
BiographyHe is a Senior Engineer - Subject Matter Expert (SME) in electrical machines and drives and Technical Lead at Siemens Digital Industries Software, Leuven, Belgium. He combines unique skills and experience in the electromagnetic and vibroacoustic domains with system-level modeling, control design, and optimization to help global automotive and industrial customers use the Simcenter suite to develop world-class products. His activities include:
- Researching new methodologies to improve service offerings and helping define new research project topics
- Business development activities across different world regions and elaborating engineering workflows
Academic track: Ph.D. in Engineering Science and Technology at the Université Libre de Bruxelles (2024), with the thesis title: System-level Reduced Order Modeling for Automotive Electric Drives.
Presented By Marieme El Ghezal (Hexagon)
Authored By Marieme El Ghezal (Hexagon)Maxime Melchior (Hexagon) Minh Vuong Le (Hexagon)
AbstractFibre-reinforced polymer laminates are nowadays increasingly used in many engineering industries due to their lightweight nature and the possibility of tailoring laminate layups to achieve a specific desired set of performance targets. Predicting the fatigue performance of structures made with such advanced materials is crucial to the design/analysis cycle, yet it remains challenging for several reasons, among which: (i) the complex microstructure and the high anisotropy of the composite materials, (ii) the variety of the damage mechanisms that occur under fatigue loadings, (iii) the non-linear relationships between the cyclic load amplitude and the fatigue life, with a strong effect of the stress ratio, and (iv) the complex and multiaxial nature of the loading during service conditions.
This paper describes a predictive multiscale computational strategy for the assessment of the fatigue lifespan of composite structural parts made with any ply stacking sequence and subjected to fatigue loadings. The Integrated Computational Material Engineering (ICME) solution that will be presented addresses the above-cited challenges.
The methodology is built upon a combination of micromechanical and phenomenological models. On the one hand, mean-field homogenization makes it possible to accurately estimate the effective response of the composite at the ply level, taking into account the local fibre orientations (e.g. from draping simulation) and the properties of the constituent materials (elastic, viscoelastic and elastoplastic behavior). On the other hand, fatigue lifetime estimations are built on a cutting-edge, phenomenological multi-axial fatigue damage model that adapts to varying stress ratios. This advanced approach accounts for mean stress sensitivity and spatially varying stress ratio, ensuring precise and reliable predictions for durability under various loading conditions. This multiscale fatigue modelling approach links the manufacturing process and the material's microstructure with the fatigue performance of the structural components, thus enabling engineers to efficiently design optimized, yet safe parts that meet the desired durability.
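To make the stress-ratio effect mentioned above concrete, here is a textbook-style worked example: a Basquin S-N curve with a Goodman mean-stress correction. The coefficients are invented, and this is emphatically not the proprietary multi-axial damage model the paper presents.

import numpy as np

sigma_f, b = 600.0, -0.09        # Basquin coefficient [MPa] and exponent (invented)
sigma_u = 900.0                  # ultimate strength [MPa] (invented)

def cycles_to_failure(sigma_max, R):
    """Allowable cycles for a constant-amplitude load with stress ratio R."""
    sigma_a = 0.5 * sigma_max * (1.0 - R)              # stress amplitude
    sigma_m = 0.5 * sigma_max * (1.0 + R)              # mean stress
    sigma_a_eq = sigma_a / (1.0 - sigma_m / sigma_u)   # Goodman-equivalent amplitude at R = -1
    return 0.5 * (sigma_a_eq / sigma_f) ** (1.0 / b)   # invert sigma_a = sigma_f * (2N)^b

for R in (-1.0, 0.0, 0.5):
    print(f"R = {R:+.1f}: N = {cycles_to_failure(200.0, R):12.0f} cycles")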
Presented By Xavier Conqui (Siemens Digital Industries Software France)
Authored By Xavier Conqui (Siemens Digital Industries Software France)Michael Hood (Siemens)
AbstractWith electric drivetrains becoming more common throughout various markets and territories as an alternative to legacy drivetrain power delivery systems, there has been a significant shift in the underlying needs for development and analysis of solutions. Simulations governing such development range anywhere from systems to electromagnetic to multiphase fluid analyses, and each analysis may have a bearing on design choices that affect other analyses and cascade throughout the design process. Thus, combining and leveraging such analyses in tandem is a crucial consideration for effective development. This presentation showcases the engineering of a new electric drive for the Simrod vehicle. Simrod is owned by Siemens and based on a vehicle from Kyburz, a Swiss company that develops and produces high-quality electric vehicles for mobility markets and private individuals. It is a two-seat, 600 kg electric vehicle with a maximum speed of 120 km/h and an expected range of 180 km. Engineering for the Simrod electric drive covers the V-cycle of development for a racing variant of the Simrod drive, which considers specific constraints relevant for racing conditions, in particular thermal management. A complete system model for the vehicle is used to translate whole-vehicle requirements down to electric-drive-specific requirements, which include the motor, inverter, and transmission. Details of thermal management simulations are presented, and a comparison of various cooling solutions and their corresponding simulation goals and considerations is given. Particularly, a spray cooling approach for thermal management is considered and its analysis is presented. Alternative cooling configurations are discussed and weighed as alternatives to the design. Because fluid and thermal simulations typically incur significant computational cost, special attention is paid to simulation speed-ups while maintaining an adequate level of accuracy; additionally, if a significant speed-up can be obtained with a tradeoff of accuracy, such analysis is also shown. Several variations of the thermal analysis approach are compared, with general suggestions for simulation methodology being provided both for fluid and solid thermal systems. An overview of feasible approaches is summarized, and their performance is discussed.
Authored & Presented By Young Lee (UL Solutions)
AbstractThe integration of artificial intelligence (AI) into modeling and simulation systems has significantly expanded their capabilities, enabling improved accuracy, adaptability, and efficiency. These systems are increasingly applied in high-stakes domains, including aerospace, healthcare, and industrial processes, where failure can have severe consequences. While AI-powered modeling and simulation systems offer remarkable opportunities, they also introduce unique safety risks, such as model instability, data biases, and unpredictable behaviors. Addressing these challenges is critical to ensuring the reliability and acceptance of these technologies in safety-critical applications.
This paper specifies safety requirements and provides guidelines for AI-based modeling and simulation systems, focusing on key safety principles: robustness, reliability, quality management, transparency, explainability, data privacy, data management, and lifecycle management. These principles form a comprehensive framework for mitigating risks and fostering trust in AI systems.
Robustness and reliability are foundational to AI safety, ensuring that systems function consistently under both expected and unexpected conditions, producing accurate and dependable results over time. Quality management underpins these principles, emphasizing structured development processes and rigorous testing to minimize systematic errors and ensure adherence to functional requirements.
Transparency and explainability address the need to understand how AI systems make decisions and why specific outputs are produced. These attributes are pivotal for building trust among stakeholders, enabling designers, developers, regulators, and end-users to scrutinize and confidently engage with AI systems.
Data privacy ensures the responsible collection, storage, use, and sharing of personal information, aligning with regulatory requirements and safeguarding individual and organizational data. Effective data management ensures the secure handling of input and output data while fostering compliance with ethical and regulatory standards. Lastly, lifecycle management maintains the safety, reliability, and compliance of AI models throughout their operational lifespan, adapting to technological, regulatory, and user needs.
By integrating these principles, this framework provides a pathway for developing AI-based modeling and simulation systems that are not only innovative but also safe, reliable, and trustworthy. This paper seeks to engage the modeling and simulation community in adopting structured approaches to AI safety, bridging the gap between technological advancements and safety-critical applications.
BiographyDr. Young M. Lee is a Technical leader and Principal engineer for Artificial Intelligence at UL Solutions, where he leads AI safety initiatives. Dr. Lee previously held the position of Distinguished Fellow and Director of AI at Johnson Controls for 6 years, where he led a global team of AI scientists in developing industrial AI solutions utilizing AI/ML and optimization technologies, including energy prediction, energy optimization, fault detection, failure prediction, equipment control, and predictive maintenance applications. Prior to that, Dr. Lee dedicated 15 years to the IBM T.J. Watson Research Center as a Research Staff Member, Research Manager, and IBM Master Inventor. There, he was involved in the development of industrial applications that integrated AI/ML, mathematical modeling, optimization, and simulation. Earlier in his career, Dr. Lee spent over 10 years at BASF, a chemical company, where he established and led the Mathematical Modeling Group, driving the development of numerous AI, optimization, and simulation models for various manufacturing and supply chain processes. Dr. Lee earned his B.S., M.S., and Ph.D. degrees from Columbia University. He has published six book chapters and over 70 refereed technical papers. As a prolific innovator, he has filed over 100 patent applications and holds more than 50 issued patents.
Presented By Asparuh Stoyanov (Key Ward)
Authored By Asparuh Stoyanov (Key Ward)Suman Sudhakaran (TriMech) Farid Benvidi (TriMech) Chris Duchaine (TriMech)
AbstractThis project demonstrates a no-code methodology for building surrogate models for engineering simulation. Using such methods, physics simulation analysts can tap seamlessly into the potential of surrogate models, transforming traditional simulation workflows to be more efficient and flexible. In this abstract, we present a workflow of how to use simulation result data to build a 3D surrogate model that any analyst can utilize without requiring programming skills—enhancing the usability of AI-driven simulation tools for broader adoption.
Finite Element Method (FEM) simulations are often computationally intensive and challenging to scale, especially for complex structural applications. Our methodology minimizes these resource-heavy processes with a graph-based surrogate model optimized for computational efficiency. To achieve this, we utilized automated extract, transform, and load (ETL) workflows to process the raw simulation data into a shape and format suitable for AI ingestion. We show how, through no-code data processing automation, analysts can focus on deriving insights rather than getting lost in technical details.
The dataset used comprised linear static analysis results of a Press Bench model, performed using SOLIDWORKS Simulation. Parametric variables included back height, feet width, and plate length, and the results predicted were displacement and stress. Using data processing and management tools, we first extracted and converted the surface field and volumetric field data from the original raw format into an open-source “AI-ready” format (.csv, .vtk). This allowed us to gather all simulation data in one place to better understand the data distributions, patterns, and correlations between variables. In the next step, we cleaned the collected data while maintaining different data versions and keeping track of changes. As a final step, using the cleaned and processed dataset, we trained a Graph Neural Network. The model was trained to predict accurate stress and displacement fields within seconds (>90% accuracy), using the 3D volume mesh data as inputs. The whole process from raw data to a trained model took approximately one workday to develop. The same approach will be tested on large-deformation nonlinear structural analysis.
This project demonstrates how structural simulation data can be used to build surrogate models that accelerate the design process. Advances in AI modeling tools now make these models widely accessible, enabling engineers to leverage physics simulation data without coding or deep machine learning expertise—expanding the possibilities in product design optimization.
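Although the workflow described above is deliberately no-code, the extract-transform step it automates can be pictured with a few lines of Python. The file name, the field names and the use of the meshio library are assumptions for illustration only.

import itertools
import numpy as np
import meshio

mesh = meshio.read("press_bench_case_001.vtk")        # hypothetical exported result file
points = np.asarray(mesh.points)                      # node coordinates (N, 3)
tets = np.asarray(mesh.cells_dict["tetra"])           # volume connectivity (M, 4)

# undirected edge list from element connectivity (each tet contributes 6 edges)
pairs = np.array(list(itertools.combinations(range(4), 2)))
edges = np.unique(np.sort(tets[:, pairs].reshape(-1, 2), axis=1), axis=0)

# nodal targets for training, e.g. displacement magnitude and von Mises stress
disp = np.linalg.norm(mesh.point_data["displacement"], axis=1)   # field name assumed
stress = np.asarray(mesh.point_data["von_mises"])                # field name assumed

np.savez("press_bench_case_001_graph.npz",
         x=points, edge_index=edges.T, y=np.column_stack([disp, stress]))
print(points.shape, edges.shape)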
AbstractSimulation and physical testing are often considered as alternative approaches for investigating novel manufacturing processes, but using them in combination can be powerful and cost-effective when developing, adjusting or debugging a joining application procedure. In this paper, investigations into thermal joining processing are described in which thermo-mechanical manufacturing process finite element simulations have been used to enhance the value of experimental investigations and to reduce the time and cost to understand the technique and its outputs. Solid-state processes can be used for joining similar or dissimilar materials, which are often metals, and these processes generally operate at elevated temperatures and under very high pressures; they can have a number of other process parameters too. These processes are often hidden inside protective vacuum or inert atmospheres and/or safety enabling enclosures, and the processing environment can make monitoring techniques challenging or even impossible, making it difficult to measure the actual loads and other process parameters during experiments, and to quantitatively or qualitatively assess the effect of varying those process parameters. Finite Element Analysis (FEA) has been found to be very useful in addressing this problem by using simulations to ‘look inside’ the equipment during the operation to aid visualisation and technical understanding. Rather than attempting to model atomic scale joint formation, a pragmatic approach has been adopted in which simplified finite element models of the process and testing arrangement have been developed to simulate the behaviour of the setup and work piece material during processing. These models have been applied to compute the relevant process parameters, and in this way, the observed results can be related to the material behaviour, allowing the tests to be better interpreted without the need for extensive further practical experiments. This hybrid modelling-testing approach has proved to be very promising, saving time and cost when developing and applying advanced manufacturing processes.
Presented By Navin Bagga (Rescale)
Authored By Navin Bagga (Rescale)Madhu Vellakal (Rescale)
AbstractThe growth of computer-aided engineering (CAE) tools and simulation data offers new opportunities for engineering teams to develop new products faster. However, challenges persist due to fragmented workflows, siloed simulation data management, and inefficient manual metadata processes. Engineers often spend up to 30% of their time managing data instead of making critical design decisions, impacting time-to-market, operational costs, and the ability to deliver optimal solutions.
Despite its many advantages, the use of CAE in the product development process presents several challenges. One major hurdle is the high computational power and specialized software required for advanced simulations. Additionally, accurately modeling real-world conditions in a virtual environment is complex and may not always capture all the nuances of physical behavior, leading to discrepancies between simulation results and real-world performance. CAE also relies heavily on accurate material data and boundary conditions; any errors or assumptions in these inputs can lead to inaccurate predictions. Lastly, while CAE speeds up development by reducing the need for physical prototypes, it may still require validation through physical testing, which can introduce delays and costs. These challenges highlight the need for continuous improvement in CAE technologies and the expertise of engineers to fully harness its potential.
Artificial intelligence (AI) is emerging as a transformative tool in addressing the challenges encountered in running CAE simulations for various applications. AI-powered surrogate models can efficiently approximate complex CAE simulations, enabling rapid evaluations of design alternatives and scenario analyses. Moreover, AI techniques, such as deep learning, facilitate data-driven approaches for enhancing simulation accuracy, predicting aerodynamic behaviors, and identifying critical flow phenomena. Additionally, AI-driven optimization algorithms can efficiently search vast design spaces to identify optimal configurations and performance-enhancing parameters for defense systems. By leveraging AI technologies, researchers and engineers can overcome computational bottlenecks, expedite design iterations, and unlock new insights into aerodynamic phenomena, thereby advancing the development of next-generation products with improved performance, efficiency, and mission effectiveness.
Graph neural network (GNN) architectures have emerged as a powerful approach for handling complex data structures and are finding increasing utility in CAE simulations. By representing the computational domain as a graph, with nodes corresponding to grid points and edges denoting connections, GNNs can capture intricate relationships and dependencies inherent in complex physical phenomena. This representation allows GNNs to effectively model spatial interactions, boundary conditions, and turbulence effects, leading to more accurate and efficient simulations. GNNs offer the ability to learn from large-scale datasets, enabling data-driven approaches for optimizing simulation parameters, predicting flow behavior, and identifying critical flow features. In this session, we will talk about how a GNN architecture is deployed for modeling the complex behavior of bipolar plates of PEM (Proton Exchange Membrane) fuel cells. Modeling fuel cells involves complex FEA and CFD methods. Geometry preparation for the FEA process is human-intensive, and solving the FEA simulation takes a minimum of 48 hours on hundreds of CPUs. The deformed geometry from the FEA simulation is processed into a CFD model for the flow prediction. Using a surrogate model approach, we will demonstrate how we can predict the structural deformation of the geometry starting from the CAD model.
This presentation explores a framework to streamline multidisciplinary simulation workflows by integrating digital thread concepts and AI-driven methodologies. By unifying historical modeling and simulation data, automating metadata capture, and leveraging AI for optimization, this approach significantly enhances collaboration, decision-making, and productivity.
The framework addresses key engineering challenges by introducing centralized data structures, automated processes, and AI-assisted modeling. A digital thread captures the complete lifecycle of simulation workflows, providing traceable and actionable insights across teams and disciplines. This ensures that data is not only accessible but also actionable, enabling engineers and decision-makers to make informed choices that accelerate development and improve product outcomes.
Key topics of the presentation include:
Engineering Problem: The complexities of managing multidisciplinary CAE workflows due to disconnected tools, fragmented datasets, and manual processes. These challenges are particularly critical in industries such as automotive and aerospace, where thermal, structural, and aerodynamic simulations must converge seamlessly for optimal product development.
Methods: A detailed explanation of how a centralized simulation data management system, such as a hierarchical framework (project > folder > study > job), can organize and share simulation files and metadata effectively. Coupled with AI models, these systems enable predictive insights, simulation acceleration, and iterative design optimization.
Results: Case studies from the automotive and aerospace industries (such as General Motors Motorsports, Boom Supersonic, Denso Manufacturing, Hankook Tire, and more) will demonstrate the impact of this approach. Examples include reducing simulation runtimes by up to 50%, improving simulation throughput, and enhancing collaboration across global engineering teams. These results underscore the potential of AI and digital thread technology to address bottlenecks, improve resource utilization, and foster innovation.
Business Outcomes: The implementation of these methodologies has led to business impact, such as faster product development, reduced operational costs, enhanced sustainability, and higher-quality designs. By enabling real-time collaboration and decision-making, these solutions empower teams to meet customer demands more effectively.
The session will feature modern modeling and simulation applications, including a demonstration of AI-enhanced workflows and digital thread capabilities. Attendees will learn how these tools can be leveraged to modernize their simulation processes, enabling faster innovation cycles and delivering superior engineering solutions.
Through this presentation, participants will gain actionable insights into modernizing CAE workflows and integrating advanced technologies to remain competitive in a rapidly evolving engineering landscape. This approach bridges the gap between traditional engineering methodologies and future-forward practices, equipping teams with the tools needed to navigate increasingly complex design challenges with confidence.
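A minimal sketch of the project > folder > study > job hierarchy and automated metadata capture mentioned under "Methods", using plain Python dataclasses. The field names and example values are invented, and this is not Rescale's actual data model.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class Job:
    name: str
    solver: str
    cores: int
    status: str = "queued"
    metadata: dict = field(default_factory=dict)

@dataclass
class Study:
    name: str
    jobs: list = field(default_factory=list)

@dataclass
class Folder:
    name: str
    studies: list = field(default_factory=list)

@dataclass
class Project:
    name: str
    folders: list = field(default_factory=list)

def capture_metadata(job: Job, **extra) -> None:
    """Attach provenance automatically instead of relying on manual bookkeeping."""
    job.metadata.update({"captured_at": datetime.now(timezone.utc).isoformat(), **extra})

job = Job(name="bipolar_plate_fea_v12", solver="generic-fea", cores=256)
capture_metadata(job, geometry="plate_rev12.stp", mesh_nodes=4_200_000)
project = Project("fuel_cell", [Folder("stack", [Study("plate_deformation", [job])])])
print(json.dumps(asdict(project), indent=2, default=str))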
Authored & Presented By Søren Thalund (Grundfos)
Abstract: At Grundfos, mechanical vibrations and acoustic noise are receiving increasing focus. As new products have increasing power density and variable speed, dynamics and noise are a main concern in development projects. Since the dynamics of such complex systems are difficult to model and understand, dynamic issues are unfortunately often found too late. Centrifugal pumps experience a very complex dynamic load pattern which depends on rotational speed, duty point, pump media, temperature and assembly tolerances. The load spectrum includes distinct frequency loads, such as unbalance, blade-pass loads and EM-motor loads, but also broadband hydraulic loads from turbulence and recirculation. To predict noise and vibrations from products at distinct operational frequencies using simulation, a very high level of accuracy and fidelity is required. Many modelling aspects such as interface characterization, accurate material models, added mass effects, FSI, system damping, and dynamic forces must be mastered to expect reasonably accurate results. Mastering all these aspects at once is very difficult at product level, hence we seek a way to model and interpret results which can still provide usable insight, although uncertainties from the aforementioned modelling aspects are practically unavoidable. To achieve this, an approach for hydraulic force modelling is presented in this paper. The paper presents an approach to model blade-pass loads and pressure pulsations. The load calculation is based on a geometrical approach, and the load magnitude and shape can be tuned to match CFD, empirical or experimental results. The mapping of the load cases is done in such a way that loads represented as unit loads can be combined and scaled to match many different load scenarios, covering different operational points and cases as well as variance due to production and assembly tolerances. Results are processed as waterfall plots, which makes it possible to interpret results in a less error-prone way and to comprehend the response over a whole operating range at once. Using this approach, we are able to model the dynamic response of large assemblies and complex products and process the results in a way which reveals the dominating dynamics leading to vibration and/or noise issues. A case is presented where experimental and simulation results are compared.
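As a rough illustration of the unit-load idea (assumed scaling laws and random frequency response functions, not Grundfos data), the sketch below superposes scaled unit-load responses at each operating speed and collects the magnitudes into a speed-versus-frequency waterfall matrix:

    import numpy as np

    # Unit-load responses: complex FRFs at one response DOF for a few unit load cases
    # (e.g. unit unbalance, unit blade-pass pulsation), as computed once from the FE model.
    rng = np.random.default_rng(0)
    freqs = np.linspace(0.0, 1000.0, 501)                    # Hz
    n_unit = 3
    H_unit = rng.standard_normal((n_unit, freqs.size)) + 1j * rng.standard_normal((n_unit, freqs.size))

    speeds = np.linspace(600.0, 3600.0, 25)                  # rpm sweep for the waterfall
    waterfall = np.zeros((speeds.size, freqs.size))
    for k, rpm in enumerate(speeds):
        # Scenario-specific scale factors for each unit load at this duty point
        # (placeholder scaling laws; in practice tuned to CFD, empirical or test data).
        scales = np.array([(rpm / 3600.0) ** 2,              # unbalance grows with speed squared
                           0.8 * (rpm / 3600.0),             # blade-pass load
                           0.2])                             # broadband hydraulic contribution
        response = (scales[:, None] * H_unit).sum(axis=0)    # superposition of scaled unit loads
        waterfall[k] = np.abs(response)
    # Each row of 'waterfall' is one operating speed; plotting speed vs frequency vs amplitude
    # gives the waterfall view used to spot dominant orders and resonances.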
Authored & Presented By Jos Vroon (NLR - Royal Netherlands Aerospace Centre)
Abstract: This study presents a Finite Element (FE) analysis aimed at predicting the thermal and mechanical behaviour of a repair conducted on an impeller part utilizing the direct energy deposition (DED) additive manufacturing technique. The impeller, exhibiting wear on its outer edge, undergoes repair through the aforementioned technique. This repair is performed using a Beam Modulo 400 DED machine. The primary objective is to assess the deformation of the impeller post-repair, with a focus on minimizing excessive deformation. The FE model developed for this research focuses on the manufacturing process to provide insights into the thermal and mechanical responses of the repaired impeller. A key aspect of the analysis involves the calibration of the FE model, both thermally and mechanically, which was achieved through dedicated calibration prints. These calibration prints are used to collect thermal measurements and achieve predictable deformations, thus enabling the refinement of the FE model. Thermal physics plays a crucial role in the repair process, as the direct energy deposition technique involves the localized application of heat to deposit material onto the impeller surface. The FE model aims to simulate the thermal distribution throughout the repair process, enabling the prediction of temperature gradients and potential thermal stresses within the impeller structure. Furthermore, the mechanical aspects of the repair are examined to assess the resulting deformation of the impeller. Excessive deformation can compromise the functionality of the repaired part, leading to its rejection. Through the FE analysis, the parameters influencing mechanical behaviour are investigated to predict the repair process and the resulting deformation. The validation of the FE model is crucial to ensure its reliability in predicting the thermal and mechanical outcomes of the repair process. By comparing simulation results with experimental data obtained from the actual prints, the accuracy of the FE model is confirmed, enhancing confidence in its predictive capabilities. Overall, this research contributes to the advancement of additive manufacturing techniques for repair applications by providing a framework for predicting the thermal and mechanical behaviour of repaired components. The insights gained from this study can be used for the optimization of repair processes, leading to enhanced performance and longevity of industrial components such as impellers.
Authored & Presented By Antonio Baiano Svizzero (Undabit)
Abstract: This work presents the application of the open-source finite element library FEniCSx to practical vibroacoustic problems, highlighting its potential for real-world engineering simulations. FEniCSx’s flexibility, efficiency, and integration with high-performance computing frameworks such as MPI and PETSc make it a suitable choice for modeling coupled acoustic-structural systems. The focus is on demonstrating its capabilities in scenarios commonly encountered in industrial and research settings, including noise control, structural vibration, and sound propagation in complex domains. A key contribution of this study is the development and implementation of a monolithic coupling approach to address fluid-structure interaction problems with non-conformal meshes at the interface. This is achieved using a specially constructed interpolation matrix, enabling accurate and stable coupling of the fluid and structural fields without requiring mesh conformity. The approach maintains consistency in the exchange of variables across the interface while preserving computational efficiency and scalability, making it particularly well-suited for geometrically complex systems. The advantages and disadvantages of this method compared to the partitioned approach are described. The presentation also discusses the challenges and solutions associated with implementing Perfectly Matched Layers (PMLs) for absorbing boundary conditions, enforcing realistic boundary constraints, and optimizing solver configurations for the Helmholtz equation. The performance of direct and iterative solvers in handling large-scale vibroacoustic models is analyzed, providing practical guidance for their selection and configuration in FEniCSx. Case studies illustrate the application of these methods to realistic problems, emphasizing the adaptability and effectiveness of FEniCSx in addressing vibroacoustic challenges. The results demonstrate the framework's ability to deliver reliable and reproducible solutions while maintaining the transparency and cost-effectiveness of open-source tools. This work aims to provide a foundation for utilizing FEniCSx in vibroacoustics, offering insights into its strengths, limitations, and areas for further development. By bridging the gap between academic research and industrial applications, this study contributes to advancing the use of open-source software in vibroacoustic engineering.
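The monolithic coupling itself reduces to assembling one block system in which an interpolation (mapping) matrix ties the non-conformal interface meshes together. The sketch below shows this block assembly with random placeholder matrices; it is not FEniCSx code, and sign conventions for the coupling terms vary between formulations:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    ns, nf = 40, 60                     # structural and fluid (pressure) DOF counts (toy sizes)
    rng = np.random.default_rng(1)

    def spd(n):
        # small random symmetric positive definite matrix as a stand-in for FE stiffness/mass
        a = rng.standard_normal((n, n))
        return sp.csr_matrix(a @ a.T + n * np.eye(n))

    Ks, Ms = spd(ns), spd(ns)           # structural stiffness and mass
    Kf, Mf = spd(nf), spd(nf)           # acoustic "stiffness" and "mass" (Helmholtz operators)
    # Interpolation matrix mapping fluid interface DOFs onto structural interface DOFs;
    # in the real implementation it is built from shape-function evaluations across the interface.
    T = sp.random(ns, nf, density=0.05, random_state=2, format="csr")

    rho0, omega = 1.21, 2 * np.pi * 200.0          # air density, angular frequency
    # Monolithic coupled system:
    #   [ Ks - w^2 Ms        -T          ] [u]   [f]
    #   [ rho0 w^2 T^T   Kf - w^2 Mf     ] [p] = [0]
    A = sp.bmat([[Ks - omega**2 * Ms, -T],
                 [rho0 * omega**2 * T.T, Kf - omega**2 * Mf]], format="csc")
    rhs = np.concatenate([rng.standard_normal(ns), np.zeros(nf)])
    sol = spsolve(A, rhs)
    u, p = sol[:ns], sol[ns:]           # structural displacements and acoustic pressures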
Authored & Presented By Laurent Chec (pSeven)
AbstractIntegrating Simulation Process and Data Management (SPDM) systems with Process Integration and Design Optimization (PIDO) platforms offers new opportunities to improve engineering workflows. This presentation highlights the benefits and practical applications of combining these tools to address challenges in managing data, automating workflows, and optimizing designs.SPDM systems help teams work together by organizing data, keeping track of versions, and ensuring clear records of changes. However, they often struggle with automating workflows, using AI, and being easy for engineers to adopt. On the other hand, PIDO platforms are designed to automate engineering workflows, explore designs with advanced methods like optimization and Design of Experiments (DoE), and use Machine Learning / AI to improve designs. By connecting these two systems, teams can overcome these challenges, making workflows smoother and data easier to use.This integration allows for seamless sharing of data, automatic workflow execution, and real-time updates of simulation results. Workflows are version-controlled and can easily adapt as designs evolve. Project managers can run complex simulations directly from their data management system without needing extra tools, while engineers gain access to better automation and tools for exploring design possibilities. Additionally, the system uses stored data to build predictive models, helping teams make smarter decisions and create more effective designs.The presentation covers two real-world examples. In one, engineers create workflows, while the SPDM system takes care of storing and tracking the data. In the second, project managers work directly in the SPDM system, making changes and running workflows without needing to learn a new tool. These examples show how the combined system improves teamwork, speeds up processes, and reduces errors.By bringing SPDM and PIDO together, engineering teams can save time, increase productivity, and improve how they work. This presentation will provide practical ideas for using these tools to make engineering projects more efficient and innovative.
Authored & Presented By Peter Langsten (Predict Change)
Abstract: For the last decade, simulation governance has been increasingly recognized as a key contributor to improving engineering simulation. It has since been documented in several standards and publications and has gained broad interest at NAFEMS venues. Simulation governance is a prerequisite for the proper management of numerical simulation practices, and it is often defined as the command and control of numerical simulation. Some of its fundamental principles are relevant for simulation even outside numerical simulation. Expectations for simulation governance as the means for successfully innovating simulation have been well documented. On one hand, management expects more clarity about the opportunities and needs for innovation in product design, manufacturing and maintenance that can reasonably be achieved by new simulation technology and new ways of working with simulation. On the other hand, simulation governance should provide a guide rail for implementing the necessary changes in engineering simulation to meet set objectives for cost efficiency, reduced physical testing, new staffing requirements and similar aims. In short, a simulation governance effort can be strategy focused or way-of-working focused. Naturally, it consists of both, and its successful implementation for increased engineering efficiency needs both of these comprehensive sets of definitions, analyses, methods and tools in a state-of-the-art, proven and relevant interplay. Yet, experience from communications with industrial engineering simulation professionals shows two separate points of interest: a strategy focus versus a way-of-working or implementation focus. This discussion takes a closer look at the benefits of both and builds on previous discussions on simulation governance best practices presented in Indianapolis in 2022 and in Tampa in 2023. It also builds on long experience of successfully compiling engineering digitalization innovation strategy and transformation implementation at leading international industrial corporations. The conclusion of this discussion shows the validity and relevance of the statement made by a great thinker of mathematical physics almost 300 years ago: “hypotheses are made in order to discover the truth; they must not be passed off as the truth itself.” Separating assumption from truth in numerical simulation is more critical, and more possible, than ever before, as discussed in this presentation, and this is where simulation governance plays a unique role today.
Biography: P. Langsten worked at ABB Atom / Westinghouse in Sweden as an FEA analyst, PLM/CAE strategist and project manager. For 20 years he was in charge of PLM/CAE strategy, architecture and implementation consultants at FiloProcess, Sweden, acting as coach and senior consultant for strategy delivery and execution for, in total, over 100,000 users. He is currently in charge of Predict Change as Senior Consultant in Simulation Governance and Industrial Digitalization.
AbstractThe convergence of High-Performance Computing (HPC) and Artificial Intelligence (AI) is revolutionizing engineering simulation, marking the dawn of a data-centric era in innovation and optimization. This paper proposes a comprehensive framework for simulation data management tailored to harness the power of AI for engineering applications. The ability to efficiently manage, analyze, and extract insights from high-quality simulation data is pivotal to realizing the transformative potential of this convergence.Central to the proposed approach is the establishment of a centralized data repository, designed to unify diverse datasets, including experimental data, numerical simulations, and third-party resources. Such a repository serves as the foundation for streamlined data organization and access, enabling engineers to manage the growing complexity of simulation data effectively. Leveraging AI-powered analytics, advanced machine learning and deep learning algorithms can be applied to these datasets to identify patterns, uncover insights, and drive data-driven decision-making. This capability significantly accelerates the design and optimization processes while reducing reliance on trial-and-error methodologies.Custom machine learning models play a critical role in this framework, offering tailored solutions for predicting performance metrics, optimizing designs, and automating routine engineering tasks. For instance, in the automotive sector, AI-driven simulations can predict vehicle performance, fuel efficiency, and safety parameters with unprecedented accuracy. This not only shortens development cycles but also enhances the overall quality and competitiveness of the final product.The concept of digital twins is another cornerstone of this approach. By leveraging simulation data to create high-fidelity digital replicas of physical systems, engineers can perform predictive maintenance, optimize system performance, and conduct virtual testing. These digital twins facilitate rapid design iterations, minimize prototyping costs, and accelerate time-to-market. A collaborative environment further enhances this framework, fostering seamless knowledge sharing among engineers and scientists. Such platforms encourage interdisciplinary innovation, amplifying the impact of AI-driven insights across diverse domains.The integration of cloud-based HPC offers a flexible and cost-effective computing solution, enabling engineers to scale resources dynamically based on project demands. This ensures optimal utilization of computational power while maintaining cost-efficiency. Moreover, cloud-based storage provides scalable solutions for managing large simulation datasets, ensuring secure, centralized access for global teams.By adopting this data-centric strategy, industries such as automotive, high tech, life sciences, and manufacturing can unlock the full potential of AI to accelerate innovation, improve product performance, and reduce development costs. This framework exemplifies the synergistic possibilities at the intersection of AI, HPC, and engineering simulation, paving the way for a transformative future in digital engineering.
Presented By Niels van Hoorn (NLR - Royal Netherlands Aerospace Centre)
Authored By Niels van Hoorn (NLR - Royal Netherlands Aerospace Centre), Tim Koenis (Royal Netherlands Aerospace Centre (NLR)), Bert de Wit (Royal Netherlands Aerospace Centre (NLR)), Jos Vankan (Royal Netherlands Aerospace Centre (NLR)), Robert Maas (Royal Netherlands Aerospace Centre (NLR)), Ozan Erartsin (Royal Netherlands Aerospace Centre (NLR))
Abstract: The aerospace industry is driving innovation in aircraft technologies and materials to achieve net-zero carbon emissions by 2050. This will require all aspects of aircraft development, manufacturing, operation and disposal to be scrutinised. To develop fuel-efficient aircraft, integrate net-zero propulsion systems, and allow for efficient recycling of material, Thermo-Plastic (TP) Carbon Fibre Reinforced Polymer (CFRP) is a promising construction material. Today’s aircraft are typically constructed along assembly lines where components are joined. The new generation of net-zero aircraft will consist of multi-functional building blocks that require novel joining technologies. Furthermore, high-volume assembly is key to support acceptance of more complex innovative aircraft concepts. TP-CFRP can be re-melted to allow for (dis-)assembly of aircraft components or sub-components. One technology that supports rapid assembly and disassembly of thermoplastic components is induction welding, which offers benefits like rapid contactless welding. However, effective heating of Uni-Directional (UD) TP-CFRP components is crucial for achieving a good-quality weld but difficult to monitor and control. Furthermore, certification of induction-welded joints poses significant challenges, necessitating a deeper understanding of the welding process. Advanced 3D Finite Element Method (FEM) modelling can enhance this understanding and aid certification. Lastly, a good understanding and means of monitoring the welding process will support rapid assembly with minimum inspection intervals. This paper presents an efficient and accurate FEM approach for induction welding of TP-CFRP laminates, capturing the multi-physics aspects of induction welding through a coupled electro-magnetic-thermal analysis. The induction heating and welding setup is modelled in detail, including a copper coil moving over a weld line with specific speed, distance, and amperage settings. The electromagnetic FE model accounts for the coil, air, and laminate, predicting the magnetic field and the eddy currents subsequently generated in the conducting carbon fibres. The fibre orientation and interfaces between plies are explicitly modelled, as they significantly influence the formation of eddy current loops. To enable real-time simulations for predictive purposes, the computationally expensive electromagnetic part of the simulation is replaced with a Machine Learning (ML) approach, more specifically an Artificial Neural Network (ANN). The ANN predicts the 3D Joule heating fields in the TP-CFRP adherends, which are then used in the thermal FE model. The thermal model includes the laminate and accounts for natural convection and radiation. Accurate material characterisation is crucial for both the electromagnetic and thermal models. The induction heating model is validated through comparison with representative experiments, showing an accurate match. In this work the FEM approach is verified with physical experiments of static heating of UD TP-CFRP plates and dynamic heating of two plates forming a lap joint. In addition, the methodology is extended to induction welding of thick UD TP-CFRP laminates up to 8 mm. By combining ML and physics-based modelling, this research enables simulation-driven design, optimisation and real-time application of induction welding processes for UD TP-CFRP, reducing the need for physical prototyping and testing, and paves the way for a digital twin that can be used to monitor the induction welding process during manufacturing to support high-volume assembly lines of innovative net-zero aircraft.
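As a schematic of the ML substitution described above (the real model architecture, grid and parameter set are not reproduced here), the following PyTorch sketch trains a small fully connected network to map coil/process parameters to a flattened Joule heating field that would then feed the thermal FE model; all data are random placeholders:

    import numpy as np
    import torch
    import torch.nn as nn

    # Toy setup: map welding parameters (coil current, speed, coil-laminate gap) to a flattened
    # 3D Joule-heating field sampled on an n-voxel grid of the laminate (here n = 8*8*4).
    n_voxels = 8 * 8 * 4
    params = torch.tensor(np.random.uniform([200, 5, 2], [600, 20, 8], size=(500, 3)),
                          dtype=torch.float32)          # amperage [A], speed [mm/s], gap [mm]
    fields = torch.randn(500, n_voxels)                 # placeholder for FE-computed heating fields

    model = nn.Sequential(
        nn.Linear(3, 128), nn.Tanh(),
        nn.Linear(128, 128), nn.Tanh(),
        nn.Linear(128, n_voxels),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(params), fields)
        loss.backward()
        opt.step()

    # Once trained on real electromagnetic FE results, the ANN replaces the expensive
    # electromagnetic solve: q_joule for a new parameter set feeds the thermal FE model.
    q_joule = model(torch.tensor([[400.0, 10.0, 4.0]]))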
Presented By Sebastian Poulheim (Altair Engineering)
Authored By Sebastian Poulheim (Altair Engineering), Christian Kehrer (Altair Engineering GmbH)
Abstract: Digital Twins (DTs), empowered by simulation and Artificial Intelligence (AI), revolutionize product life cycle management by predicting system behavior, leveraging operational data, and informing strategic decision-making. This paper explores how these technologies synergize to deliver transformative benefits across all life cycle stages, ensuring substantial returns on technological investments. In product development, AI-augmented digital twins enable comprehensive design exploration, evaluating system and sub-system interactions to accelerate optimization and ensure technical requirements are met efficiently. For operational systems, physics- and AI-driven DTs monitor real-world conditions, offering actionable intelligence to enhance overall equipment effectiveness, minimize maintenance costs, and improve operational efficiency. Through practical examples and customer-driven case studies, this work highlights how leading organizations from different industry sectors deploy DTs to inform strategy and push the boundaries of product design and innovation. We will share when and why companies in Heavy Equipment, Industrial Machinery, and Automotive invest in Digital Twins, and offer insights into the challenges faced during implementation and how they were overcome. The specific use cases presented will answer the following questions:
• How can the efficiency of an excavator be increased by 20% by holistically optimizing the bucket shape within two working days?
• How can production waste be reduced by 15% through improved process capabilities in sheet metal forming?
• How does an operational Digital Twin for damage evaluation, processing sensor data in real time, help switch from time-based to load-dependent maintenance of a vehicle fleet, leading to savings in the seven-digit range?
A focus is placed on the core technologies (simulation, AI, and data analytics) that underpin the realization of valuable DTs. We will address the role of different fidelity levels of physics-based simulations, the need for reduced-order modeling to ensure real-time capable models, and how AI helps close the feedback loop between product design and operation. By leveraging these convergent technologies, DTs provide robust frameworks for design optimization, real-time monitoring, and effective control, driving unparalleled improvements in efficiency and sustainability. This paper serves as a roadmap for leveraging DTs to maximize product performance and operational success, showcasing how digital innovation redefines product and systems management in competitive industries.
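The reduced-order modelling step mentioned above is often realized with Proper Orthogonal Decomposition (POD) of simulation snapshots; the sketch below (synthetic snapshot data, not from the case studies) shows the basic recipe of truncating an SVD and projecting full-order states onto the reduced basis:

    import numpy as np

    # Snapshot matrix: columns are full-order states (e.g. nodal temperatures) at
    # different times / operating points, typically exported from detailed simulations.
    n_dof, n_snap = 10_000, 200
    rng = np.random.default_rng(0)
    modes_true = rng.standard_normal((n_dof, 5))
    S = modes_true @ rng.standard_normal((5, n_snap)) + 0.01 * rng.standard_normal((n_dof, n_snap))

    # Proper Orthogonal Decomposition: truncated SVD of the snapshot matrix
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999) + 1)     # keep modes capturing 99.9% of the energy
    Phi = U[:, :r]                                  # reduced basis

    # Project a new full-order state to reduced coordinates and reconstruct it
    x_full = S[:, 0]
    x_reduced = Phi.T @ x_full                      # r numbers instead of 10,000
    x_approx = Phi @ x_reduced
    print(r, np.linalg.norm(x_full - x_approx) / np.linalg.norm(x_full))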
Presented By Armin Amindari (Beko)
Authored By Armin Amindari (Beko), Simge Öztürk (Beko)
Abstract: Despite their extensive use in industrial applications over the past decades, rubber-like materials can be very challenging to model when it comes to solving system dynamics. Besides their complex geometries, the nonlinear material behavior makes the response of rubber parts unpredictable and increases the uncertainty level of dynamic systems. Due to the material structure, the mechanical response of these parts depends strongly on excitation amplitude and frequency. In addition, owing to its high elongation capacity, the material undergoes large deformations, which in turn lead to complex local buckling and contact nonlinearities. In this study an integrated, hybrid finite element-multibody dynamics approach has been adopted to optimize the design of a washing machine rubber gasket based on dynamic performance targets. This gasket application is probably one of the most challenging elastomer applications in industry due to its relatively large dimensions and harsh loading conditions, in which the material undergoes significant and complicated deformations under varying excitation frequencies up to 25 Hz. Firstly, the elastomer material was characterized in detail to derive a comprehensive visco-hyperelastic constitutive model. Using this material model, a nonlinear finite element modelling approach was developed to capture both the viscoelastic behavior and the contact nonlinearities under dynamic loading scenarios. Secondly, a detailed multibody dynamic model of the system was developed to test the dynamic performance of the system when using alternative gasket designs. Next, several random curves were generated to create alternative forms for the axisymmetric design of the gasket. Using these forms, ideal axisymmetric designs were created. After evaluating these designs according to the calculated dynamic stiffness values, an initial gasket form was created. The dimensions of different segments of the initial form were then parameterized, and by applying size optimization iterations an optimum form was created. Using the generated optimum form, a detailed design for the gasket was developed, and the dynamic stiffness behavior of the final design was calculated in three translational and three rotational axes. Using the calculated dynamic stiffness values, a nonlinear bushing component was developed for the new gasket. The performance of the new and old gasket designs was evaluated in detail using the multibody dynamic model, and the deformation levels on the gasket under operating conditions of the dynamic system were calculated. These deformation levels were applied as boundary conditions in finite element simulations, and the durability of the design was evaluated in detail and approved numerically. The finalized design was prototyped. Detailed physical tests were carried out and the durability of the gasket design was confirmed experimentally as well. Additionally, stability performance tests on washing machines with the new optimized gasket revealed a remarkable increase of 30% in stability limits, underscoring the success of the optimization study. The integrated approach proposed in this study can be adopted by researchers to develop optimal and more reliable gasket designs for dynamic applications.
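For the material characterization step, one common building block is fitting a hyperelastic law to uniaxial test data; the sketch below fits the two Mooney-Rivlin constants to synthetic stretch-stress data with SciPy (the study uses a richer visco-hyperelastic model, so this covers only the hyperelastic part):

    import numpy as np
    from scipy.optimize import curve_fit

    # Uniaxial nominal stress for an incompressible Mooney-Rivlin solid:
    #   P(lam) = 2 * (lam - lam**-2) * (C10 + C01 / lam)
    def mooney_rivlin_uniaxial(lam, C10, C01):
        return 2.0 * (lam - lam**-2) * (C10 + C01 / lam)

    # Placeholder test data (stretch vs nominal stress in MPa); the real input would be the
    # measured uniaxial characterization data of the gasket elastomer.
    lam = np.linspace(1.0, 2.5, 30)
    stress = mooney_rivlin_uniaxial(lam, 0.35, 0.05) \
             + 0.01 * np.random.default_rng(3).standard_normal(lam.size)

    (C10, C01), _ = curve_fit(mooney_rivlin_uniaxial, lam, stress, p0=(0.1, 0.1))
    print(f"fitted C10 = {C10:.3f} MPa, C01 = {C01:.3f} MPa")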
Presented By Ruofeng Cao (Cranfield University)
Authored By Ruofeng Cao (Cranfield University), Yongle Sun (Cranfield University), Wojciech Suder (Cranfield University), Stewart Williams (Cranfield University)
Abstract: Wire-based directed energy deposition (DED) additive manufacturing (AM) uses an intense energy source, such as an electric arc, laser, or electron beam, to melt metal wire feedstock, which is deposited layer by layer along a planned path to build a structural part. The wire-based DED AM process is advantageous thanks to its large-scale deposition capacity, high efficiency of material and energy use, and wide applicability across industries. However, research is still needed to enhance process productivity and part quality. Wire preheating is a feasible method to significantly enhance the deposition rate. It can also help reduce the heat input of the energy source, inhibit pore formation and refine grains, thereby enhancing the mechanical properties of the deposited part. Induction heating (IH) is a highly controllable, non-contact heating method suited for rapidly and precisely preheating the wire feedstock to a target temperature. In addition, compared with conventional weld-wire preheating methods such as resistance heating, bypass heating and auxiliary arc, IH avoids magnetic blow and is applicable to most metals with a flexible setup. However, IH preheating of moving wire feedstock is complicated and underexplored for AM applications. In this study, to understand the complex electromagnetic heating mechanism, a multiphysics finite element model of coupled electromagnetic and thermal fields is developed based on the Eulerian framework, which improves computational efficiency by 60% compared to a model in the Lagrangian framework. Furthermore, for the case of feedstock passing through a stationary magnetic field at a constant wire feed speed, a more efficient steady-state approach is proposed, saving 90% of the computational time compared to the transient model. The temperature predictions of the model are validated by thermocouple measurements. A parametric sensitivity analysis was also performed with the developed efficient model to evaluate a range of coil geometries, revealing the effects of different parameters on the preheating temperature and energy consumption to guide AM process optimisation.
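Conceptually, the steady-state view of a wire moving at constant feed speed leads to a one-dimensional advection-diffusion energy balance; the sketch below solves it with finite differences for a prescribed volumetric heating zone (illustrative properties, and with the coupled electromagnetic field replaced by an assumed heat source, so it is only a conceptual stand-in for the full multiphysics model):

    import numpy as np

    # Steady-state 1D energy balance for a wire moving at feed speed v through a heated zone:
    #   rho*c*v * dT/dx = k * d2T/dx2 + q(x),  with T(0) = T_ambient.
    L, n = 0.2, 401                       # modelled wire length [m], grid points
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    rho, c, k = 7800.0, 500.0, 30.0       # steel-like properties (illustrative)
    v = 0.05                              # wire feed speed [m/s]
    q = np.where((x > 0.05) & (x < 0.10), 5e7, 0.0)   # assumed volumetric heating in the coil zone [W/m^3]

    # Assemble the linear system: upwind differences for advection, central for diffusion
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = 1.0; b[0] = 20.0            # inlet at ambient temperature [C]
    A[-1, -1], A[-1, -2] = 1.0, -1.0      # zero-gradient outlet
    for i in range(1, n - 1):
        A[i, i - 1] = -k / dx**2 - rho * c * v / dx
        A[i, i]     = 2 * k / dx**2 + rho * c * v / dx
        A[i, i + 1] = -k / dx**2
        b[i] = q[i]
    T = np.linalg.solve(A, b)
    print(f"peak preheat temperature ~ {T.max():.0f} C")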
Presented By Zhi Wei Lim (NING Research)
Authored By Zhi Wei Lim (NING Research), Kenji Yamamoto (NING Research Pte Ltd)
Abstract: This paper demonstrates the workflow of conducting a simulation of an electric vehicle (EV) fire in urban settings using commercial software. The methodology assesses the fluid-structure interaction of flame propagation on urban structures to provide a comprehensive structural assessment. Today, climate change is one of the most pressing global challenges. The rate at which global temperatures are rising has tripled since the 1850s, and 2024 is on track to be the hottest year on record. As global warming continues, there is an urgent and unavoidable need to develop strategies to combat climate change. One of these strategies is the shift from internal combustion engine (ICE) vehicles to electric vehicles (EVs). EVs operate on batteries and offer a cleaner alternative to traditional ICE vehicles that emit pollutants into the atmosphere. As such, cities today are undergoing rapid electrification in the field of transportation. However, while efficient, batteries pose significant hazards to both individuals and property. Under mishandling or malfunction, batteries are susceptible to fires resulting from thermal runaway, an uncontrollable state that results in the release of heat energy, toxic gases and smoke. Unlike conventional fires, battery fires are exceedingly difficult to extinguish, often leading to prolonged, self-sustaining flames that resist standard firefighting efforts. In extreme cases, thermal runaway can also result in explosions. Unfortunately, there has been an alarming increase in battery-related fires, damage and even fatalities in recent years. As EVs and battery technologies continue to evolve, it becomes necessary for better battery safety measures to be implemented globally. This naturally calls for further study, such as battery safety assessments through simulations. However, existing literature is limited to the fluid domain, usually focusing on the effects of toxic gas and fire propagation. In the dense concrete jungles of today's metropolises, accurate structural effects must be accounted for in a comprehensive assessment. The commercial computational fluid dynamics (CFD) software FLACS is employed to simulate the dispersion effects and propagation of fire. The unique distributed porosity concept in FLACS allows for effective multiscale modelling. Selected parameters such as temperature and overpressure are tabulated. Subsequently, these form the inputs for our structural analysis in the Finite Element solver. Geometries representative of urban environments are subjected to these loads, offering a comprehensive multiphysical assessment of both fluid effects (e.g. concentration of toxic gas) and structural effects in cities.
Presented By Wouter Dehandschutter (Siemens Industry Software)
Authored By Wouter Dehandschutter (Siemens Industry Software), Juan Manuel Lorenzi (Siemens), Daniel Berger (Siemens), Biplob Dutta (Siemens)
Abstract: Thanks to the increasing use of simulation in product development and product performance optimization, design engineers are producing large amounts of valuable simulation data that they wish to leverage and exploit efficiently in future work. This allows engineers to build on knowledge acquired through prior product development programs to better address new design challenges and achieve the performance targets of new product generations. Moreover, efficient and reliable reuse of prior simulation data and simulation results also allows for more agile organizations, where engineering resource allocations need to be flexibly adapted to and aligned with the rapidly changing market landscape and customer requirements. The cost and delay of re-training and re-skilling personnel could be greatly reduced by providing efficient access to the knowledge acquired through all prior simulation activities. In short, our customers are becoming increasingly aware that they have a potential gold mine of historical data, and they want reliable and performant means to exploit these data. In this paper, we present our work on how to improve and enrich simulation data management solutions so that novice engineers can benefit maximally from knowledge acquired in prior product generations. The basis for that knowledge is embedded in the wealth of simulation data that has been stored in the data management system. In a first approach, we demonstrate how prior simulation results can help analysts speed up troubleshooting and improve the quality of their simulation results through automatic retrieval of, and comparison with, similar simulation results from prior projects. To realize this, dedicated methods extract and capture knowledge from existing simulation result data and use a similarity analysis algorithm to identify which legacy data are suitable as a reference for a new analysis. In a second step, this supporting mechanism is extended with intuitive search capabilities, so that the user can interact with the system and fine-tune the specific design issue for which prior knowledge needs to be gathered. In this stage, additional information is used that complements the information extracted from simulations, in order to improve the provided feedback.
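A minimal version of the similarity retrieval idea can be built from feature vectors extracted from each stored result plus a nearest-neighbour index; the sketch below uses scikit-learn with random placeholder features and hypothetical result IDs (the paper's actual similarity analysis algorithm is not reproduced here):

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    # Each legacy result is summarized by a feature vector extracted from its metadata and
    # result fields (e.g. load case type, peak stress, modal frequencies, mesh size).
    legacy_features = np.random.default_rng(4).standard_normal((5000, 12))
    legacy_ids = [f"SIM-{i:05d}" for i in range(5000)]      # hypothetical result identifiers

    # Normalize features so no single quantity dominates the distance metric
    mu, sigma = legacy_features.mean(axis=0), legacy_features.std(axis=0)
    index = NearestNeighbors(n_neighbors=5, metric="cosine").fit((legacy_features - mu) / sigma)

    # For a new analysis, extract the same features and retrieve the most similar prior runs
    new_run = np.random.default_rng(5).standard_normal((1, 12))
    dist, idx = index.kneighbors((new_run - mu) / sigma)
    for d, i in zip(dist[0], idx[0]):
        print(f"{legacy_ids[i]}  similarity-distance = {d:.3f}")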
Abstract: This session would be organized and executed by members of the NAFEMS Simulation Governance and Management Working Group. Efforts are underway to coordinate with the Engineering Data Science Working Group as well, with hopes of having a joint session. There is currently a lot of excitement and enthusiasm around how Artificial Intelligence and Machine Learning will impact the way that simulation work is performed. At one end, such tools may become useful assistants, aiding the analyst in setting up, post-processing, and then documenting simulations using otherwise traditional tools. At the other extreme, AI/ML would replace traditional physics-based simulations, producing fast results based upon its training set, with limited or no traceability on how it determined the answer. The simulation governance aspect of these technologies is currently uncertain and troubling. Existing tools for Verification and Validation are generally not fully applicable. New tools need to be developed to evaluate the adequacy of training sets versus the new problem for which results are being requested. Some providers claim the ability to evaluate error; the robustness and accuracy of these evaluations need to be studied and proven. Distinctions need to be made between tools that are based on data fitting only and ‘physics informed’ models. In this landscape of those promoting the product on one side, and those skeptical of change on the other, the SGMWG will seek to share practical observations on how simulation governance can currently be applied to these new tools, with the expectation that this will form a starting point for the eventual emergence of formal guidance as tools and methods mature. For example, many simulation users are accustomed to creating response surfaces or reduced-order models which then become a fast and efficient means to seek out optimum designs. Once a proposed design has been identified, a full-fidelity model is run as part of validating the design. In many ways, AI/ML models can be compared to a high-dimensional response surface. Thus such tools can be an efficient way to rapidly iterate a design, the final version of which would then be subjected to a full-fidelity analysis using traditional simulation tools and appropriate VVUQ efforts. Content for this session would consist of one or more of the following: papers drawn from the abstracts submitted for the congress, presentations from SGMWG or EDSWG members on the topic, or a panel discussion with members of the SGMWG and EDSWG. This will be determined once the SGMWG has had the opportunity to review submitted abstracts for presentations that would complement the theme, and once working group members are able to confirm travel authorization with their respective employers.
Presented By Simon Mayer (dAIve GmbH)
Authored By Simon Mayer (dAIve GmbH), Alexander Koeppe (PDTec AG)
Abstract: The European automotive industry faces immense pressure to stay competitive amid growing demands for innovation, stricter safety standards, and ambitious sustainability goals. Development cycles remain longer than those of international competitors, delaying market entry. Limited budgets and resources further amplify the need to replace physical prototypes with digital simulations and virtual testing. This requires faster, more accurate simulation workflows without compromising quality. To address these challenges, AI is integrated into CAE workflows to accelerate simulations, improve accuracy, and reduce redundant efforts. The solution relies on three interconnected pillars that streamline the process and demonstrate how AI can be successfully implemented in practice. The first pillar is structured data management, where simulation and test data from tools such as ANSYS, NASTRAN, and Abaqus are centralized into a structured platform. We show how valuable legacy data, often scattered and underutilized, can be accessed, reused, and prepared for AI applications in a structured and automated way. The second pillar involves the use of small, task-specific AI models for pre-screening. Unlike large, generic machine learning models, these smaller, highly focused models predict outcomes based on historical simulation and test data. In our presentation, we will demonstrate how these AI models serve as an intelligent pre-screening mechanism, identifying the most promising design solutions early in the process. This allows engineers to focus computational resources effectively, reducing unnecessary simulations and redundant iterations. Participants will see how straightforward it can be to train and deploy these models with existing tools and workflows. The third pillar focuses on continuous improvement and collaboration. AI-generated predictions, along with the trained models, are integrated back into the data management system, creating a continuous improvement loop. We will showcase how engineers can reuse pre-trained models for new tasks without the need for retraining or additional data preparation. This approach significantly reduces the number of required simulations while fostering collaboration: teams across projects can access validated models, apply them to new challenges, and build upon previous insights. Our practical examples will prove that AI implementation is not a “black box”, but an accessible and tangible solution for engineering teams. In this presentation, we will go beyond describing the solution. We will demonstrate its practical implementation step by step, proving that integrating AI into CAE workflows is not complex or exclusive to experts. By showcasing real-world examples and a concrete workflow, we will highlight how AI tools can be effectively deployed to centralize data, train task-specific models, and create a self-improving simulation ecosystem. Participants will see firsthand how AI empowers engineers to accelerate design processes, optimize resources, and deliver innovative results. The key message: applying AI is achievable and accessible for every team willing to leverage their data effectively. The impact: this AI-driven approach accelerates R&D processes by reducing unnecessary simulations, streamlining workflows, and enabling engineers to focus on high-value tasks like design exploration and system optimization. The result is shorter development cycles, greater resource efficiency, and improved decision-making without compromising quality.
While AI does not replace traditional simulations, it serves as a powerful complement, enhancing speed, precision, and collaboration. By the end of the presentation, participants will not only understand the value of AI in CAE but also leave with actionable insights to implement AI-driven workflows within their own teams—proving that AI is not magic but a practical tool for transforming CAE into a strategic enabler of innovation.
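To make the pre-screening pillar concrete, the sketch below (synthetic data and a generic scikit-learn regressor rather than the presenters' specific models) trains a small task-specific model on historical simulation results and uses it to shortlist candidate designs before any full simulation is run:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    # Historical CAE data: design parameters (e.g. gauge, material index, rib spacing)
    # and a scalar outcome of interest (e.g. peak intrusion from a crash load case).
    X = rng.uniform(size=(800, 6))
    y = 50 - 20 * X[:, 0] + 10 * X[:, 1] ** 2 + rng.normal(scale=1.0, size=800)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingRegressor().fit(X_tr, y_tr)
    print("hold-out R^2:", round(model.score(X_te, y_te), 3))

    # Pre-screening: rank a large pool of candidate designs and keep only the most
    # promising ones for full-fidelity simulation.
    candidates = rng.uniform(size=(10_000, 6))
    predicted = model.predict(candidates)
    shortlist = candidates[np.argsort(predicted)[:20]]   # 20 lowest predicted intrusions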
Authored & Presented By Leszek Pecyna (The Manufacturing Technology Center)
Abstract: Industrial laser welding is a rapidly emerging technology in the manufacturing sector, capable of producing large volumes of high-quality welds in short times. Extensive research in laser-based manufacturing focuses on process parameter selection, which is key to ensuring the output quality of welds. Traditionally, experimental trials are employed prior to production to define optimal process parameters. While accurate, this approach incurs high resource and time costs. Alternatively, multi-physics simulation models can be employed; however, these can also be very time consuming. Novel use of AI modelling for laser welding presents the opportunity to significantly reduce the time, and therefore the cost, of predictive simulation. However, this approach also poses challenges, for example: defining use-case-specific data structures, formulating the problem accurately, and addressing the issue of insufficient data for training robust models. To address these challenges, this study proposes a practical framework that considers key process variable (KPV) definition, design of experiments (DOE) for data collection, and a process flow for closed-loop AI-driven simulation. The latter feature allows for rapidly reconfigurable models based on changes in upstream data. The study also presents the implementation and results of a use case where an AI-driven surrogate model is implemented to replicate the simulation of microstructural changes during the welding process. We propose a comprehensive framework for AI-driven optimisation of laser welding process parameters. The approach utilises simulation data, validated experimentally, to train the proposed AI model. In this paper we show how AI can decrease computational complexity, and therefore reduce the time and cost of physics-based simulation models. The framework is demonstrated by a neural network surrogate model, which replicates the predictive capability of a complex microstructural simulation. Utilising calculated temperature profiles, which have been validated against experimentation, the new AI workflow predicts final grain density and aspect ratio. The AI model is also benchmarked against standard machine learning methods to evaluate performance differences and determine the most effective approach. The paper also explores the feasibility of applying Physics-Informed Neural Network (PINN) models, which incorporate partial differential equations into the learning process, enhancing predictive capabilities and reducing data quantity requirements. This approach shows promising potential to significantly reduce the reliance on costly experiments and simulations. By exploring AI-driven modelling, we aim to establish a more efficient pathway for advancing laser welding technology. The closed-loop framework developed in this study lays a solid foundation for future work in the field, with the potential to be applied to other physics-based problems beyond laser welding.
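As a simplified stand-in for the surrogate described above (synthetic data and an assumed feature set, not the study's actual model), the following scikit-learn sketch maps temperature-profile features to grain density and aspect ratio with a small multi-output neural network:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(7)
    # Inputs: assumed features of the validated temperature profile at a weld location
    # (e.g. peak temperature, time above liquidus, cooling rate, thermal gradient).
    X = rng.uniform([1400, 0.1, 100, 50], [2200, 2.0, 2000, 500], size=(600, 4))
    # Targets: microstructural outputs from the full simulation (grain density, aspect ratio);
    # the relations below are placeholders used only to generate toy training data.
    y = np.column_stack([
        1e3 * X[:, 2] / X[:, 0] + rng.normal(scale=5, size=600),
        1.0 + 0.002 * X[:, 3] + rng.normal(scale=0.05, size=600),
    ])

    surrogate = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0))
    surrogate.fit(X, y)
    grain_density, aspect_ratio = surrogate.predict([[1800.0, 0.8, 900.0, 250.0]])[0]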
Presented By Zhi Hang Zhang (NING Research)
Authored By Zhi Hang Zhang (NING Research), Weiquan Er (NING Research)
Abstract: This paper presents a new hybrid method using simulation and data analytics for monitoring fatigue damage and estimating the remaining useful life of marine vessels. The hybrid method takes reference from the smart functions introduced by the American Bureau of Shipping (ABS) to enhance approaches in structural health monitoring. The analysis of material fatigue is essential in ensuring the reliability of marine vessels such as ships and oil rigs, which undergo continual cyclic loading from wind and sea waves. Fatigue lifecycles are currently determined using a large variety of methods ranging from sensor experiments to high-fidelity simulations. The resulting information and data are crucial in informing users to adopt pre-emptive measures to avoid premature structural failure during operation. While current physics-based finite element (FE) simulation methods are accurate, they are too computationally intensive for continuous health monitoring across all loading combinations of marine vessels in operation. On the other hand, data-driven methods have gained traction due to their speed and versatility, but fall short in capturing the required complexity of physical systems. This newly developed hybrid simulation and data-driven health monitoring method therefore aims to provide a continuous fatigue life prediction for marine vessels in operation. High-fidelity FE simulations under potential sea states were used to generate training data for a surrogate Gaussian process (GP) model. This accelerates predictions of fatigue damage while preserving the underlying physics of the system. By imparting the physics as prior knowledge of the system, the GP model can also provide continuous predictions with appropriate confidence intervals. Based on the level of uncertainty, the GP model can be iteratively refined with additional training data to improve its robustness and effectiveness. In comparison to high-fidelity simulations, this method was able to predict fatigue damage with a maximum discrepancy of 10%, while achieving a speedup of up to 30%. The results successfully demonstrate the feasibility and effectiveness of the method for continuous structural health monitoring.
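The GP surrogate workflow can be sketched as follows with scikit-learn, using placeholder sea-state inputs and damage rates (not the paper's data or kernel choice): fit on high-fidelity results, predict with confidence intervals, and pick the next training case where uncertainty is largest:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern, WhiteKernel

    rng = np.random.default_rng(8)
    # Training data from high-fidelity FE runs: sea-state parameters -> fatigue damage rate
    X = rng.uniform([1.0, 4.0], [9.0, 14.0], size=(60, 2))   # significant wave height [m], peak period [s]
    y = 1e-4 * (X[:, 0] ** 2.2) / X[:, 1] * (1 + 0.05 * rng.standard_normal(60))  # placeholder damage rates

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5) + WhiteKernel(), normalize_y=True)
    gp.fit(X, y)

    # Continuous prediction with uncertainty for the sea state currently measured on board
    mean, std = gp.predict([[5.5, 9.0]], return_std=True)
    print(f"predicted damage rate = {mean[0]:.2e} +/- {1.96 * std[0]:.2e}")

    # Adaptive refinement: run the next high-fidelity simulation where the GP is least certain
    grid = np.column_stack([g.ravel() for g in np.meshgrid(np.linspace(1, 9, 40), np.linspace(4, 14, 40))])
    _, grid_std = gp.predict(grid, return_std=True)
    next_case = grid[np.argmax(grid_std)]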
Presented By Ivan Čehil (Koncar Instrument Transformers)
Authored By Ivan Čehil (Koncar Instrument Transformers), Pejo Konjatic (Mechanical Engineering Faculty in Slavonski Brod), Igor Ziger (Koncar Instrument Transformers Inc.), Matija Vedris (Koncar Instrument Transformers Inc.)
Abstract: In the preparation process for laboratory seismic testing of instrument transformers, accurately determining the mechanical properties of all materials used in their key components is particularly important. This becomes even more critical when FEM (Finite Element Method) dynamic analysis is conducted as part of the preparation. Knowing the precise mechanical properties of these materials ensures reliable modeling, allowing for more accurate predictions of the transformer's performance under seismic conditions. Instrument transformers play a crucial role in transformer stations, serving as indispensable components for the reliable operation of power systems. As such, their design must address not only electrical performance requirements but also mechanical robustness. Proper dimensioning of these transformers is essential to withstand the maximum mechanical stresses they may encounter during their operational lifespan, especially in regions prone to seismic activity. To meet the requirements set forth in IEEE 693-2018, thorough preparation is necessary. This involves conducting FEM analyses to determine the natural frequencies, evaluate the maximum stresses in critical areas, and dimension the transformer’s key components, such as the tank of the electromagnetic unit and the insulators of the capacitor divider. While generic mechanical properties are typically sufficient for isotropic materials, orthotropic materials, like composites, present a unique challenge. Their complex mechanical behavior requires extensive testing to accurately define their properties. In most cases, data provided by composite manufacturers is used to address this gap. This paper presents the seismic testing of two capacitor voltage instrument transformers, performed according to IEEE 693-2018, and analyzes the obtained results. A detailed analysis of the influence that the mechanical properties of all materials used have on the results of the FEM analysis was conducted after the seismic tests. The study investigates the discrepancies observed between the natural frequencies, stresses, and deformations identified in the pre-test FEM analysis and those measured during laboratory testing. By incorporating a range of mechanical properties into a post-test FEM analysis, new results were generated and compared with shake table test outcomes. These comparisons provided critical insights into the underlying causes of the discrepancies and led to a conclusive evaluation of the transformers’ seismic performance. After testing, a DoE (Design of Experiments) analysis of the transformer's natural frequencies was conducted using a wide range of key mechanical properties to identify the causes of the discrepancies between the natural frequencies measured during the test and those predicted by the pre-test FEM analysis. A sensitivity analysis was performed and a response surface was generated, further supporting the conclusions on the transformers’ seismic performance.
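A minimal version of the post-test DoE and response-surface study might look like the sketch below, where a hypothetical placeholder function stands in for the FEM modal solver and a quadratic response surface is fitted over a three-level factorial design of assumed orthotropic properties:

    import numpy as np
    from itertools import product
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    # Three-level full factorial DoE over uncertain orthotropic properties of the insulator
    # (illustrative factors: E1, E2, G12 in GPa); the response would come from the FEM modal solver.
    levels = {"E1": [30, 40, 50], "E2": [8, 10, 12], "G12": [3, 4, 5]}
    doe = np.array(list(product(*levels.values())), dtype=float)

    def fem_first_frequency(e1, e2, g12):
        # Placeholder standing in for the FEM modal analysis of the transformer; returns f1 in Hz.
        return 4.0 + 0.05 * e1 + 0.12 * e2 + 0.3 * g12

    f1 = np.array([fem_first_frequency(*row) for row in doe])

    # Quadratic response surface fitted to the DoE results
    rs = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(doe, f1)

    # Simple sensitivity measure: change in predicted f1 over each factor's full range
    for j, name in enumerate(levels):
        lo, hi = doe.copy(), doe.copy()
        lo[:, j], hi[:, j] = min(levels[name]), max(levels[name])
        print(name, "effect on f1 [Hz]:", round(np.mean(rs.predict(hi) - rs.predict(lo)), 3))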
Abstract: Finite element analysis (FEA) of additive manufacturing (AM) is a powerful tool for understanding the thermo-mechanical behaviour of materials during the manufacturing process. This understanding can help improve the process and the structural integrity of the manufactured component. However, the inherent extreme thermal gradients, rapid cycles of heating and cooling, and the microscale nature of the process make achieving an accurate FEA model a challenging task. Calibration of an FEA model is often adopted to achieve a high level of fidelity, and experimental data is usually invaluable for that purpose. The driving force behind the mechanical effects in additive manufacturing is the heat source and how it is applied. The correct amount of heat, the speed of the heat source, and the resulting thermal magnitudes and gradients are critical for producing good structural integrity, free of voids and poor microstructure. A popular and effective approach for the thermal calibration of additive manufacturing FEA models is to experimentally measure melt pools under varied conditions and to compare the data against FEA isotherms. FEA models can then be calibrated to reproduce the measured melt pools for a comprehensive range of thermal conditions. In the reported work, a set of around 50 measurements has been carried out on the size of melt pools produced by different combinations of heat source power and speed. The experimental setup comprises a single pass deposited on a small aluminium alloy cuboid by applying laser powder bed fusion (L-PBF). The produced specimens have been cut at locations where steady-state conditions are expected, and the melt pool width and height are recorded for each specimen. The data is then processed to calibrate the FEA model throughout the experimental design space. Conclusions are finally drawn on how the thermal functions within the FEA model are formulated to achieve good simulation fidelity.
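The calibration step can be illustrated as a least-squares fit of a heat-source parameter so that predicted melt-pool widths match the measurements; the sketch below uses synthetic data and a simple scaling relation standing in for the FEA isotherm evaluation (the real calibration evaluates the FEA model itself):

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(9)
    # ~50 measured melt-pool widths over a grid of laser power and scan speed (placeholder values).
    P = rng.uniform(150.0, 350.0, 50)      # laser power [W]
    v = rng.uniform(0.5, 2.0, 50)          # scan speed [m/s]
    w_meas = 95e-6 * np.sqrt(0.40 * P / (300.0 * v)) * (1 + 0.04 * rng.standard_normal(50))  # [m]

    # Stand-in for the FEA isotherm width as a function of process parameters and the unknown
    # absorptivity eta; c_geom and P_ref are assumed constants of the illustrative model.
    def model_width(P, v, eta, c_geom=95e-6, P_ref=300.0):
        return c_geom * np.sqrt(eta * P / (P_ref * v))

    fit = least_squares(lambda eta: model_width(P, v, eta[0]) - w_meas, x0=[0.6], bounds=(0.05, 1.0))
    print(f"calibrated absorptivity ~ {fit.x[0]:.2f}")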
Presented By John Parry (Siemens Digital Industries Software)
Authored By John Parry (Siemens Digital Industries Software), Tatiana Trebunskikh (Siemens Digital Industries Software), Tim Brodovsky (Siemens Digital Industries Software)
Abstract: We presented our overall automated workflow for power module development at the NAFEMS Multiphysics Conference in Singapore, and focused on the EMC aspects of the workflow at the NAFEMS Simulation in Electronics event. This highly technical presentation demonstrates the complete design workflow of a full-bridge IGBT module, starting with EDA design and covering electrothermal, thermal, thermomechanical and electromagnetic analysis, optimization and ROM extraction, all of which is performed in a CAD environment. The goal of the optimization is to maximize heat transfer while minimizing pressure drop, inductance, and warpage within the module to enhance its reliability and performance. Particular emphasis is placed on the numerical techniques involved, how the simulation model fidelity can be confirmed and enhanced with transient thermal testing, and how a boundary-condition-independent reduced-order model is created for use in system and electrothermal simulation solutions. The simulation scope includes Joule heating within the direct bonded copper substrate, the bond wire interconnects and the bus bars. The focus of this study is not the design of the module itself; however, we have included the simulation of the electromagnetic performance, as loop inductance is a parasitic effect that needs to be minimized. Indeed, the design is a trade-off, balancing variations in current density, stray inductance, electrical resistance, thermal resistance, mechanical stress and the physical design constraints required for manufacturability. The module is an internal reference design, authored within an EDA environment and transferred into mechanical CAD, preserving its parametric definition and easing design space exploration and performance optimization. The same model can be used for non-linear mechanical stress simulations that use the predicted temperature fields without the need for translation. Wire bond fatigue, caused by repeated heating and cooling of the wires, is a common failure mechanism. Design space exploration can involve the simultaneous variation of a large number of input parameters, which is recommended to ensure that a globally optimum solution can be found. The technique used combines a number of search algorithms to ensure that the scheme does not get stuck at a local optimum. As the number of designs simulated increases, AI is used to accelerate the process where the accuracy is acceptable, with the accuracy increasing as more designs are simulated. The work reported here replicates similar work done in conjunction with a major European automotive OEM, where the module design is authored in mechanical CAD and fully parameterized, so in both cases the starting point for all simulations is a parameterized CAD model.
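One common compact form for a thermal reduced-order model is a Foster network fitted to a transient thermal impedance curve; the sketch below fits a three-stage network to placeholder Zth data (the boundary-condition-independent ROM discussed in the paper is more general than this simple form):

    import numpy as np
    from scipy.optimize import curve_fit

    # Fit a 3-stage Foster thermal network to a measured/simulated heating curve Zth(t):
    #   Zth(t) = sum_i R_i * (1 - exp(-t / tau_i))
    def foster(t, R1, tau1, R2, tau2, R3, tau3):
        return (R1 * (1 - np.exp(-t / tau1))
                + R2 * (1 - np.exp(-t / tau2))
                + R3 * (1 - np.exp(-t / tau3)))

    t = np.logspace(-4, 2, 200)                                     # time [s]
    zth_ref = foster(t, 0.05, 1e-3, 0.15, 0.2, 0.30, 8.0)           # placeholder transient impedance [K/W]
    zth_meas = zth_ref * (1 + 0.01 * np.random.default_rng(10).standard_normal(t.size))

    p0 = [0.1, 1e-3, 0.1, 0.1, 0.1, 10.0]
    popt, _ = curve_fit(foster, t, zth_meas, p0=p0, bounds=(1e-6, [10, 100] * 3))
    print("fitted (R_i, tau_i) pairs:", np.round(popt, 4))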
Presented By Marko Thiele (Scale)
Authored By Marko Thiele (Scale), Kim Schaebe (SCALE GmbH)
AbstractToday, virtual product development is essential in vehicle projects in large automotive groups in order to keep costs down. As part of the CAE development process, the vehicle’s behavior in operational mode must be investigated. It has become common practice to employ multibody dynamics simulations for this task, where the vehicle is treated as composed of various rigid or elastic bodies that can undergo translational or rotational displacement. Multibody dynamics simulation experts then first build up a virtual vehicle model and subsequently let it drive certain maneuvers on various kinds of standardized roads under different driving conditions that apply for all projects.Managing, sharing and collaborating on the related CAE data, CAE methods and CAE processes across a larger number of simulation engineers can, however, become a challenge. It is therefore important to establish means of working together in teams. For this it makes sense to organize simulation data in such a way that common CAE data files as well as certain process-scripts or simulation-methods in general, can be shared with the whole team. To this end, libraries of common CAE files - such as templates, technical components or connectors that define the vehicle, or the roads and driving conditions that are applied to the vehicle - and all types of process-scripts are created and maintained by a small number of experts. They can then be used by all CAE users working on the different car projects. To ensure that this also works regardless of the location of the CAE engineers, special tools are required to exchange data. For certain disciplines of the virtual product development process, e.g. handling crash simulations, simulation data management has successfully been introduced many years ago and has been in use ever since. Other simulation domains such as multibody dynamics simulations face different kinds of challenges. In this presentation, we want to demonstrate how to foster collaboration between multibody dynamics simulation experts using simulation data management tools. We will address the virtual product development process – from vehicle model building to running the simulations and the integration with the preprocessor – and work out concepts that improve effectiveness as well as consistency for those engaged in the workflow.In particular, we will focus on the challenges that we face when introducing a simulation data management system for multibody dynamics simulations: Since in a multibody dynamics simulation, naturally, the model is made up of many individual parts that are each stored in an individual file, the data structure to include in the simulation is extensive and rather complex. Plus, that model has to be combined with a certain road, driving condition and maneuver. An in-depth reproduction of that complex structure in the simulation data management system is key, though, to make full use of the system’s advantages. However, complexity must not be traded for usability and maintainability such that integrating the preprocessor for easily managing the model is also an important aspect to take care of.In our presentation, with an example integration of a multibody dynamics simulation workflow, we will show how to achieve a detailed mapping with the simulation data management system. At the same time, we will demonstrate how we manage to foster interaction with the system among the simulation engineers and thus enhance collaboration.
Presented By Dirk Hartmann (Siemens)
Authored By Dirk Hartmann (Siemens), Robin Bornoff (Siemens), Stefan Gavranovic (Siemens)
AbstractToday's engineering systems, whether in aerospace, automotive, or other domains, are becoming increasingly complex. Manually setting up and running (multiphysics) 3D simulations for these complex systems is time-consuming and error-prone. At the same time, the trend towards faster innovation cycles and the need for improved product performance require more extensive design space exploration. As the number of design iterations and thus the number of simulations increases, the effort and expertise required to set up complex simulations become a major limiting factor. This trend will be further aggravated by the relative shortage of CAE experts, which prevents organizations from leveraging the ever-increasing available compute power. While in recent years many investments have targeted the acceleration of solver processing, pre- and post-processing workflows have remained mostly unchanged. Increasing the level of automation in CAE workflows therefore offers a major opportunity. The latest advancements in Generative AI have unlocked unprecedented possibilities for automation through autonomous AI agents. AI agents are autonomous software systems that can perceive their environment, make decisions, and take actions to achieve specific goals. Within this presentation, we use two specific examples to demonstrate how such autonomous agents can be tailored to independently perform various tasks associated with the modelling, pre- and post-processing, and analysis of complex engineering systems. The first example addresses how autonomous AI agents can realize the vision of fully autonomous design validation within Computer Aided Design workflows. We demonstrate such an autonomous design validation workflow on a 3D structural simulation of a jet engine bracket. Starting from the specification given to the designer, the copilot autonomously infers loads, boundary conditions, and material properties from the geometry of the model and any accompanying textual specifications, and executes the simulation. Once the simulation is complete, the results are summarized in a report, providing non-expert users with sufficient information to determine whether the design is viable. The second example demonstrates how Generative AI, specifically its vision capabilities, can be used for CAD part recognition and classification. This in turn is a prerequisite for automated model defeaturing and abstraction. Preprocessing times are then substantially reduced, especially when considering massive CAD assemblies. Furthermore, we outline how Knowledge Graph based Simulation Data Management helps to reduce hallucinations of the CAE agents. The concepts are demonstrated within the Simcenter tools using the two concrete examples but generalize to other contexts as well. This approach tremendously reduces the complexity and effort of CAE workflows, ultimately allowing non-expert users to perform virtual design validation.
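A minimal, generic sketch of the perceive–decide–act loop behind such an autonomous design-validation agent is shown below. Every function is a hypothetical placeholder standing in for calls to a CAD/CAE tool chain and to a large language model; none of them are Simcenter or other vendor APIs, and the numbers are invented.

```python
# Generic perceive-decide-act loop for an autonomous design-validation agent.
# All functions are hypothetical placeholders, not vendor APIs.

def infer_setup(geometry: dict, spec_text: str) -> dict:
    """Stand-in for the LLM step: derive loads, boundary conditions, material."""
    return {"load_N": 4000.0, "fixed_faces": ["bolt_holes"], "material": "Ti-6Al-4V"}

def run_simulation(geometry: dict, setup: dict) -> dict:
    """Stand-in for the solver step: return a few summary quantities."""
    return {"max_stress_MPa": 612.0, "max_displacement_mm": 0.41}

def summarize(results: dict, allowable_MPa: float) -> str:
    """Stand-in for the reporting step: turn raw results into a pass/fail statement."""
    ok = results["max_stress_MPa"] <= allowable_MPa
    return (f"Max stress {results['max_stress_MPa']:.0f} MPa vs allowable "
            f"{allowable_MPa:.0f} MPa -> design {'viable' if ok else 'not viable'}.")

def design_validation_agent(geometry: dict, spec_text: str) -> str:
    setup = infer_setup(geometry, spec_text)          # perceive + decide
    results = run_simulation(geometry, setup)         # act
    return summarize(results, allowable_MPa=880.0)    # report for non-experts

print(design_validation_agent({"name": "jet_engine_bracket"},
                              "Bracket must carry a 4 kN vertical load."))
```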
Presented By Sonu Mishra (Dassault Systemes)
Authored By Sonu Mishra (Dassault Systemes), Bhaskar Ramagiri (Dassault Systemes), Nikhil Dhavale (Dassault Systemes), Youngwon Hahn (Dassault Systemes)
AbstractLaser welding is one of the most widely used methods to connect the tabs to the current collectors within the jellyroll of lithium-ion battery cells. This welding method provides the speed and precision required for mass manufacturing of battery cells, modules and packs. The laser welding process also supports the joining of dissimilar materials, including welding of thermoplastics, by tailoring the laser wavelength according to the material absorption/transmission spectra. For battery cells, the tabs are typically thin metal strips, often made from metallic materials such as aluminum, copper, or nickel, that act as electrical connectors between the battery cell and the external circuit. Ensuring a strong, reliable weld between the tabs and the cell is essential, as this connection plays a vital role in the battery’s overall electrical performance, heat dissipation, and structural integrity. The intensity of the laser and the heat conduction of the joined materials affect the welding depth, and lower welding speeds also lead to a greater welding depth. These laser process parameters therefore have a significant effect on the quality and reliability of the welded joint. In this study, a comprehensive numerical analysis is conducted to investigate the influence of the laser welding parameters: laser power, welding speed, and spot diameter were systematically varied to understand their effects on the weld joint’s mechanical strength and thermal profile. The simulation process begins by modeling the laser tool to ensure it accurately generates and delivers the required energy along the predefined welding path. This step involves defining the laser's power, beam profile, focal point, and motion trajectory to replicate real-world energy deposition precisely. Once the laser tool is established, a sequential transient thermal analysis followed by a structural simulation is carried out. In this phase, the thermal simulation predicts the heat distribution, temperature evolution, and cooling rates in the material as the laser moves along the path. The results from the thermal simulation are then used as inputs for the structural simulation, which evaluates the mechanical response of the material. This includes analyzing thermal expansion, residual stresses, and potential distortions caused by the heating and cooling cycles. Together, these simulations provide a comprehensive understanding of the welding process and its effects on the material.
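The parameter study described above (power, speed, spot diameter varied through a sequential thermal-then-structural chain) can be orchestrated with a simple sweep. In the sketch below, run_thermal and run_structural are hypothetical placeholders for the solver calls, and the line-energy and stress formulas are deliberately crude stand-ins, not the models used in the study.

```python
from itertools import product

# Hypothetical placeholders for the sequential thermal -> structural chain.
def run_thermal(power_W, speed_mm_s, spot_mm):
    """Stand-in for the transient thermal simulation along the weld path."""
    line_energy = power_W / speed_mm_s            # J/mm, a crude process metric
    return {"line_energy_J_mm": line_energy, "peak_T_C": 25 + 0.9 * line_energy}

def run_structural(thermal_result):
    """Stand-in for the structural simulation driven by the thermal field."""
    return {"residual_stress_MPa": 0.4 * thermal_result["peak_T_C"]}

# Parameter ranges varied in the sweep (values are illustrative only).
powers = [800, 1000, 1200]     # W
speeds = [50, 100, 150]        # mm/s
spots  = [0.2, 0.4]            # mm

for p, v, d in product(powers, speeds, spots):
    thermal = run_thermal(p, v, d)
    structural = run_structural(thermal)
    print(f"P={p:5d} W  v={v:4d} mm/s  spot={d} mm  "
          f"peak T={thermal['peak_T_C']:.0f} C  "
          f"residual stress={structural['residual_stress_MPa']:.0f} MPa")
```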
Presented By Yongle Sun (Cranfield University)
Authored By Yongle Sun (Cranfield University), Alireza Mosalman Haghighi (Cranfield University), Eloise Eimer (Cranfield University), Stewart Williams (Cranfield University)
AbstractWire arc additive manufacturing (WAAM) holds significant promise for transforming industries such as aerospace and energy, offering large-scale printing capabilities with high deposition rates and exceptional material and energy efficiency. Despite its promise, inconsistent and inferior mechanical properties of WAAM-built parts—often due to defects and unfavourable microstructures under certain process conditions—remain a barrier to its wider adoption in critical engineering applications. Addressing this challenge is crucial to unlocking the full potential of WAAM. Inter-layer mechanical working, such as rolling, has emerged as an effective solution, significantly reducing porosity, refining microstructures, and alleviating residual stresses and distortions. Inter-layer rolling not only enhances the mechanical properties of WAAM-built parts but also enables higher performance and reliability. At the core of these improvements lies rolling-induced deformation, a critical factor in driving or amplifying the beneficial effects. However, understanding and optimising the rolling process require a detailed exploration of the deformation mechanisms, which is difficult to achieve by experimental methods alone due to the complexities of real-time measurement and evolving geometries inherent in WAAM. This study addresses the challenge of simulating rolling-induced large deformation in multi-layer deposition by WAAM and aims to gain deeper insights into the deformation mechanisms and their implications. A novel finite element analysis (FEA) approach is proposed to simulate the large, evolving deformation caused by inter-layer high-pressure rolling during WAAM. This approach overcomes a critical issue in FEA: mesh distortion in large-deformation scenarios. By developing a series of rolling models for each WAAM-deposited layer, the simulation framework uses solution mapping to incorporate the previously deformed geometry, along with stress and strain histories, into the next model with a new updated mesh. This stepwise simulation provides a robust framework to capture the progressive effects of rolling on WAAM-built parts. The capabilities of this approach are demonstrated and validated through its application to aluminium alloy walls, revealing the accumulation and stabilisation of rolling-induced deformations during WAAM. Key insights include the evolution of lateral widening and plastic yielding across layers, as well as the discovery of folding deformation when the contact angle of the deposited layer is large. The increase in layer width after rolling affects the geometric accuracy and the plastic strain and layer folding have significant implications for the final mechanical properties of WAAM-built parts, underscoring the critical role of modelling in advancing understanding and optimisation. The findings from this research represent a step forward in addressing large-deformation modelling challenges in the integrated process of WAAM and mechanical working. Beyond advancing the understanding of the deformation mechanisms, the results would provide guidance for selecting and optimising mechanical working processes to improve WAAM deposition quality. By bridging the gap between experimental limitations and practical implementation, this modelling work contributes to the development of high-performance WAAM parts for demanding engineering applications, further solidifying WAAM’s position as a cornerstone technology for the future of manufacturing.
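The stepwise "deposit, roll, remesh, map, continue" structure of the proposed simulation framework can be illustrated with a short pseudocode-style loop. Every function below is a hypothetical placeholder for an FEA operation; the sketch only mirrors the loop structure described in the abstract, not the authors' implementation.

```python
# Stepwise rolling simulation with solution mapping between per-layer models.
# Every function is a hypothetical placeholder for an FEA operation.

def deposit_layer(state, layer_id):       # add a new WAAM layer to the geometry
    return {**state, "layers": state["layers"] + [layer_id]}

def roll_layer(state):                    # apply inter-layer high-pressure rolling
    return {**state, "plastic_strain": state["plastic_strain"] + 0.02}

def remesh(state):                        # build a fresh, undistorted mesh
    return {**state, "mesh_generation": state["mesh_generation"] + 1}

def map_state(old_state, new_mesh_state): # transfer geometry, stress and strain history
    return {**new_mesh_state, "plastic_strain": old_state["plastic_strain"]}

state = {"layers": [], "plastic_strain": 0.0, "mesh_generation": 0}
for layer in range(1, 6):                 # five deposited layers, for example
    state = deposit_layer(state, layer)
    rolled = roll_layer(state)
    fresh = remesh(rolled)                # avoids mesh distortion at large deformation
    state = map_state(rolled, fresh)      # carries the deformation history forward
    print(f"layer {layer}: mesh #{state['mesh_generation']}, "
          f"accumulated plastic strain {state['plastic_strain']:.2f}")
```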
Authored & Presented By Markus Weinberger (Hexagon Manufacturing Intelligence)
AbstractOngoing changes in SDM infrastructure – such as moving the whole server installation or only individual components into a Cloud environment, containerization of the server infrastructure, SaaS, distributed installations and load-dependent scaling of containers – are trends for modern SDM installations. The CAE tools used in simulation processes and managed by an SDM system fall into two main categories. On the one hand, there are batch applications, mainly solvers, which typically run in a well managed, centralized HPC environment under the control of a queuing system and do not require manual intervention during execution. On the other hand, there are interactive applications which are typically executed on a local client, in particular pre- and postprocessors as well as user scripts. Many such applications and tools are used by simulation engineers in their daily work, and they are used in a very flexible way. While even in a Cloud-based HPC infrastructure the solver setup is almost identical for all users, the client applications on local laptops and workstations are often a challenge – rather than being provided through a centrally managed virtual desktop, they are installed in a very heterogeneous way on each client. Like other enterprise systems, modern SDM solutions are usually web-based for a variety of reasons, for example that no local software has to be installed on clients and that external process partners can be integrated easily. Modern UI technologies have also made it possible in recent years to provide users a “desktop-like experience” in web-based environments. However, regarding locally installed client applications the same challenges remain – for example, a web browser does not allow the direct modification of files on the local disc, which is very difficult to cope with in flexible simulation processes. A modern, open SDM solution which provides a user-friendly and flexible yet vendor-independent integration of local applications is highly desirable, and it should make a potential Cloud deployment of the SDM infrastructure completely transparent or, in other words, “invisible” to the user. This presentation will discuss related use cases and their challenges, and show corresponding solution approaches.
Authored & Presented By Cheryl Liu (Stryker Orthopaedics)
DescriptionComputational modeling and simulation are playing an increasingly important role in MedTech to accelerate time to market, reduce development cost and enhance patient outcome. Physical testing simulation, manufacturing process simulation as well as patient-specific simulation models enable comprehensive pre-clinical evaluation of devices. However, model credibility needs to be carefully assessed for all models before they are considered suitable for their context of use. In this keynote presentation, Cheryl will discuss how modeling and simulation is being leveraged throughout the lifecycle of orthopaedic implants, surgical instruments, and robotic applications, and how model credibility is established using various comparators including clinical data.
BiographyCheryl Liu is currently the Director of Computer Modeling and Simulation at Stryker Joint Replacement where she is leading a group of modeling and simulation experts working on advanced CM&S applications throughout product lifecycle from how they are designed, manufactured, and tested to how they help improve patient care in the real world. Prior to joining Stryker, Cheryl worked at Dassault Systems Simulia as life sciences industry lead where she worked closely with simulation experts from academia, industry, and regulatory bodies to advance credibility and regulatory acceptance of CM&S. Cheryl received her MS and PhD from University of Notre Dame and BS from Beihang University. Cheryl has been a member of ASME VVUQ40 subcommittee since 2013 and is co-leading the patient-specific model working group. Cheryl is currently serving as implant section Chair-elect for the Orthopaedic Research Society and is on the CM&S steering committee at MDIC.
Authored & Presented By Frank Bauer (BMW)
DescriptionThe demands placed on different simulation technologies are always increasing, and because of this, the computing power and storage space needed to run these simulations are growing as well. This rise in compute power is driven by the detail, and thus size, of the models being simulated as well as by an expanding portfolio of use cases to be evaluated. In the automotive crash field, the number of drivetrains has been increasing, as has the number of safety requirements from legal tests as well as consumer safety tests. Additionally, simulations give us the possibility to check the uncertainties in our car concepts, which requires even more simulations to be run. In order to cope with the demands of dealing with such a large amount of simulation data, a framework is presented which is built around a central data hub. This framework has many associated processes and workflows which provide the most efficient way to store and process the data as well as enabling process automation and future AI integration. All of these functionalities are becoming more and more important in the current challenging budget and headcount climate. This central hub for data management and process automation offers a solution for the aforementioned topics as well as quality assurance, consistent documentation, and model traceability, all of which are necessary to enable future virtual certification procedures.
BiographyFrank Bauer has a background in crash simulation, with topics such as NVH and fatigue strength a recurring focus. Over the years, his main efforts have been on the comprehensive structuring and standardization of crash simulation processes, leading to the establishment of the group at BMW that now primarily focuses on these topics.
10:30
Authored & Presented By Alexandru Macovei (Fokker Aerostructures)
AbstractIn order to show how easy but dangerous your job can be, this paper highlights some interesting problems and pitfalls from the day-to-day work of an aeronautical structural engineer. In a world focused on sustainability, lightweight eVTOLs and small aircraft might become the future of air transportation. This is pushing the boundaries of design into new corners, so quality assurance needs special attention. As this market grows there is a high demand for fast studies, focused on topology optimization and very light aero structures. The time to market is significantly reduced, such that there is less and less time for checking and QA of the FE model. There is a strong belief that automating the process, and especially the checks, can solve the QA problem and improve the process. But a machine can only check what we have already encountered or thought to be an error. What do we do for a new problem? Are we confident enough to let a machine explore new problems that end up on an aircraft? A short but mandatory checklist is proposed, together with the workflow to be followed, in order to make sure that the end-result FE model is going to deliver a good-quality and safe answer. It is essential that companies make use of their history and experience when it comes to the quality of the models. That is the starting point, which has to be respected by new people in the market, even if they have a new vision or if they are pressed by new demanding deadlines. The prerequisites to ensure a good quality of the FE models are: - Building an internal procedure for quality assurance, adapted to the company specifics and in accordance with existing guidance (e.g. EASA CM-S-014); - Having a QA template or checklist; - Building up internal analysis templates for pre/post processing such that QA is embedded in the process; - Use of SPDM to follow the process and create gateways for quality checks; - Training of personnel to follow and understand the QA procedures; - Accompanying complex models with simple benchmark models that build understanding and trust in the results; - Continuously developing engineering judgement and the understanding of results as the main tool. The new tools on the market are more user friendly. This enables people to take shortcuts: very easily and quickly, sometimes at the click of a mouse button, the result is on your screen – pictures or animations that can help engineers take quick decisions. When a complex problem is solved by complex solvers through a very user-friendly interface, the user has more time to check and to educate himself about the complexity of the problem.
Presented By Stefan Müller (Sidact)
Authored By Stefan Müller (Sidact), Dominik Borsotto (SIDACT GmbH), Vinay Krishnappa (SIDACT GmbH), Nouran Abdelhady (SIDACT GmbH), Kirill Schreiner (SIDACT GmbH), Tobias Weinert (SIDACT GmbH)
AbstractIn order to achieve and fulfill design and crash criteria as a central component of a vehicle development project, the adaptation and redesign of the simulation model is the central task of the engineer. These changes are validated by new simulation runs, which ideally show improved behavior. The challenge here is, on the one hand, to analyze the changes and, on the other hand, to exclude the possibility that the changes have undesired side effects. These effects can either be known but undesirable, or unknown and therefore not yet evaluated in terms of their impact on crash behavior. In both cases, it is the engineer's task to find and document all the effects of the applied changes and, if necessary, adapt the model accordingly. Comparing two or more simulation results is time-consuming. This process can be automated using machine learning. For this purpose, a database is created, for example for a construction kit or a development tree. This database is expanded with each analysis if new crash behavior in terms of deformation or the behavior of a post variable is detected. All behaviors contained in the database thus represent the event horizon against which each new simulation is compared. If a behavior is known but unintentional, this behavior can be tracked. This means that the user is warned for every simulation that exhibits this behavior. The manageability of the database depends heavily on its size. Thanks to data compression techniques and the reduction to the essential components of a crash result, the database only requires a fraction of the original simulation results. For a use case in which the sum of all simulation results amounted to more than 7 TB, the database is only 14 GB, which corresponds to a reduction by a factor of over 500. But what if, for example, you have seen a certain behavior of a component in a test, but this does not match the final simulation result? Or you are interested in knowing whether a certain crash behavior occurred in the set of all simulations. In this case, a geometric search can be carried out in the database. By applying Model Order Reduction (MOR), the new deformation pattern is projected onto the low-dimensional latent space of the simulation results available in the database. In the low-dimensional space the distances between the deformation pattern of interest and the deformation patterns of all simulation results for all time steps are analyzed and a similarity score is determined. This provides an overview of which models and at which time steps the behavior was similar. It is therefore possible to search for early buckling behavior as well as for the final state of a component deformation. Even for databases containing several hundred full vehicle crashes, the search takes just a few seconds, making it an interactive task.
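The core of the geometric search, projecting deformation snapshots onto a reduced basis and comparing distances in that latent space, can be sketched in a few lines of NumPy. The data below is synthetic, the basis size is arbitrary, and the similarity score is a simple monotone mapping chosen only for illustration; this is not the SIDACT implementation.

```python
import numpy as np

# Sketch of the MOR-based similarity search: project deformation snapshots onto
# a low-dimensional POD basis and compare a query pattern against all stored
# time steps by distance in the latent space. All data here is synthetic.

rng = np.random.default_rng(0)
n_dofs, n_snapshots = 3000, 200                       # displacement DOFs x stored states
snapshots = rng.normal(size=(n_dofs, n_snapshots))    # stand-in "database"

# Build a reduced basis from the database (truncated SVD = POD modes).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :20]                                     # keep 20 modes

latent_db = basis.T @ snapshots                       # database in latent space (20 x 200)

def similarity_scores(query_deformation):
    """Similarity of one deformation pattern to every stored snapshot."""
    q = basis.T @ query_deformation                   # project query to latent space
    d = np.linalg.norm(latent_db - q[:, None], axis=0)
    return 1.0 / (1.0 + d)                            # simple monotone similarity score

query = snapshots[:, 42] + 0.05 * rng.normal(size=n_dofs)
scores = similarity_scores(query)
print("best matching snapshot:", int(np.argmax(scores)))
```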
BiographySince 2008, working in the field of “FEMZIP: Data compression for simulation results” from 2008 - 2012 at Fraunhofer SCAI and after the Spin-off at SIDACT GmbH. 2021 PhD in Mathematics: "Compression of an array of similar crash test simulation results" at HU Berlin. Since 05/2024: CTO at SIDACT GmbH.
Presented By Andrew Halfpenny (HBK)
Authored By Andrew Halfpenny (HBK), Cristian Bagni (Hottinger Bruel & Kjaer (HBK)), Stephan Vervoort (Hottinger Bruel & Kjaer (HBK)), Amaury Chabod (Hottinger Bruel & Kjaer (HBK))
AbstractFatigue is the predominant cause of structural failure under cyclic loading conditions. Fatigue failure typically involves two main stages: an initial phase where one or more cracks form (crack initiation stage), followed by a phase where these cracks, if subject to sufficiently high cyclic stress, grow until failure (crack propagation stage). The relative duration of these stages varies based on factors such as material properties, structural design, and application. Furthermore, in some cases, a crack may extend into a low-stress region, halting its progression and preventing failure. In such scenarios, the crack may be considered acceptable in-service, as it does not compromise the component's durability (damage tolerance approach).The term ‘fatigue crack growth’ refers to the propagation (or not) of cracks under cyclic loading. Since the 1950s, extensive research has focused on understanding and characterizing crack propagation under cyclic loading. This includes defining threshold, propagation, and fast fracture regions from both experimental and numerical perspectives, as well as accounting for mean stress effects and crack retardation. Unfortunately, this research is dispersed across numerous scientific publications. Furthermore, common simulation methods often focus on either the initiation or propagation stage, which can lead to inaccurate fatigue life predictions when both stages are significant. This issue is particularly relevant for welded structures, lightweight jointed structures, and lightweight cast components, which are increasingly important for more environmentally sustainable transportation solutions.The aim of this work is to enable a more efficient review and comparison of available crack growth analysis tools to support informed decision-making, by collecting the most relevant fatigue crack growth laws and models into a single document. Additionally, this work introduces a unified fatigue life estimation approach, called the “Total-Life” method, that integrates both the initiation and propagation stages, by combining principles from strain-life and fracture mechanics, and a state-of-the-art multiaxial crack-tip plasticity model to account for mean-stress and overload retardation effects.
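The idea of adding an initiation life to a propagation life can be illustrated with textbook ingredients: a Basquin-type relation for crack initiation and a numerically integrated Paris law for crack growth. The material constants and the simple combination below are illustrative stand-ins, not the "Total-Life" models or data referred to in the abstract.

```python
import numpy as np

# Illustrative "initiation + propagation" life estimate.
# All material constants are textbook-style placeholders.

sigma_f = 900.0      # MPa, fatigue strength coefficient (Basquin)
b       = -0.09      # Basquin exponent
C       = 3.0e-12    # Paris coefficient (mm/cycle, MPa*sqrt(mm) units)
m       = 3.0        # Paris exponent
Y       = 1.12       # geometry factor (assumed constant for simplicity)

def initiation_life(stress_amplitude):
    """Cycles to initiate an engineering crack (Basquin relation)."""
    return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)

def propagation_life(delta_sigma, a_init=0.5, a_crit=10.0, da=0.01):
    """Cycles to grow the crack from a_init to a_crit (Paris law, crack size in mm)."""
    cycles, a = 0.0, a_init
    while a < a_crit:
        dK = Y * delta_sigma * np.sqrt(np.pi * a)   # stress intensity factor range
        cycles += da / (C * dK ** m)                # dN = da / (C * dK^m)
        a += da
    return cycles

sigma_a = 180.0                                      # MPa stress amplitude
N_total = initiation_life(sigma_a) + propagation_life(2 * sigma_a)
print(f"estimated total life: {N_total:.3g} cycles")
```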
Presented By Wilhelm Thunberg (Hitachi Energy)
Authored By Wilhelm Thunberg (Hitachi Energy), Sami Kotilainen (Hitachi Energy)
AbstractThis paper handles the application of modern optimization software in dielectric design development of high voltage circuit breakers (HVCB) and shows how coupling of different simulation types can create more efficient workflows. Given the push to replace the circuit breaker insulation gas SF6 with more eco-efficient solutions, there is a need for high pace development and innovation. This necessitates new methods for HVCB development that enable the rapid finding of an optimal design given a large set of parameters and competing objectives. This multi-objective nature can be related to the varying conditions the HVCB must handle, or to different physical properties, such as mechanical and dielectric. A common challenge is to balance the different objectives and to understand all the inherent trade-offs in the design. The main purpose of this paper is to show a dielectric simulation optimization using the MOGA-II algorithm and compare the workflow to more traditional ones, such as a full factorial search. The comparison criteria include the time required to achieve the optimal design, the dielectric robustness of the “best” found design, and the ability to effectively evaluate the compromise between competing objectives. In addition to the dielectric optimization, a new approach for coupling this workflow to optimization of mechanical properties of the HVCB is shown. This paper also details a new Python-based approach that reduces runtime by keeping simulation software clients active during large optimization runs. Initial findings indicate that the application of optimization algorithms like the MOGA-II gives a quicker route to an optimized design, while also enabling coupling of different optimization categories. As a result, new insights into the inherent objective trade-offs caused by the multi-objective nature in HVCB design can be found. These advancements have the potential to streamline the design process and can contribute to the development of more sustainable and efficient products.
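The Python-based runtime reduction mentioned above relies on the general pattern of a persistent worker: start the simulation client once and reuse it for every design the optimizer evaluates. The sketch below shows that pattern only; the class, its methods and the toy objective functions are assumptions for illustration, not any vendor API or the actual dielectric/mechanical models.

```python
import time

# Generic "persistent client" pattern: the (expensive) simulation client is
# started once, reused for every candidate design, and released at the end.
# All methods and objective formulas are illustrative placeholders.

class SimulationClient:
    def start(self):
        time.sleep(0.01)                    # stands in for slow tool start-up
        self._ready = True

    def evaluate(self, design):
        """Return two competing objectives for one candidate design (toy model)."""
        dielectric_margin = 1.0 / (1.0 + design["gap_mm"])
        mechanical_mass   = 2.0 * design["gap_mm"] + design["wall_mm"]
        return dielectric_margin, mechanical_mass

    def stop(self):
        self._ready = False

def run_study(designs):
    client = SimulationClient()
    client.start()                          # started once, not once per design
    try:
        return [client.evaluate(d) for d in designs]
    finally:
        client.stop()                       # released only after the whole study

candidates = [{"gap_mm": g, "wall_mm": w} for g in (5, 10, 15) for w in (2, 4)]
for d, (f1, f2) in zip(candidates, run_study(candidates)):
    print(d, "->", round(f1, 3), round(f2, 1))
```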
Authored & Presented By Yi Di Boon (TE Connectivity Germany)
AbstractMany plastic connector housings are manufactured through the injection molding process because the process is fast and consistent, enabling high volume production. For large housing parts, the cooling step of the molding process can be very long. In order to increase productivity, it is desirable to reduce the cooling time using technologies such as mold inserts with conformal cooling channels and high thermal conductivity mold inserts. Conformal cooling channels are cooling channels which conform to the shape of the mold cavity. Compared to traditional straight cooling channels, the distance between the coolant in conformal cooling channels and the plastic melt is reduced, thus enabling more efficient heat transfer and faster cooling. High thermal conductivity mold inserts typically consist of a core made of a good thermal conductor, such as copper, and a steel shell to provide strength. They are suitable for the cooling of parts with complex geometries that are difficult to cool with traditional or conformal cooling channels. These mold inserts are usually fabricated using additive manufacturing methods, so their fabrication comes at a cost. The benefit of shorter cooling time needs to be balanced against the extra cost of producing the mold insert and its expected life. In this study, a simulation workflow to evaluate the cooling strategy in the injection molding process is discussed. A case study involving high thermal conductivity mold inserts is presented. Injection molding simulations are used to determine the cooling time reduction that can be achieved using the mold inserts. Subsequently, the pressure and temperature loads are exported from the molding simulations to be used in a fatigue analysis to estimate the expected mold insert life. A cost-benefit analysis can then be carried out for the molding process. The workflow can help to determine the optimum cooling strategy for the injection molding process of different plastic parts.
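Once the molding simulation has delivered the cycle-time saving and the fatigue analysis the expected insert life, the cost-benefit step reduces to simple arithmetic. The numbers below are purely illustrative placeholders, not figures from the study.

```python
# Illustrative cost-benefit check for a high-conductivity mold insert.
# All numbers are made up; the real inputs would come from the molding
# simulation (cycle-time saving) and the fatigue analysis (insert life).

cycle_time_saving_s = 8.0        # seconds saved per shot (from simulation)
machine_rate_per_h  = 60.0       # EUR per machine hour
insert_cost         = 12000.0    # EUR, additively manufactured insert
insert_life_shots   = 400000     # shots before replacement (from fatigue analysis)
annual_volume       = 1200000    # parts per year

saving_per_shot = cycle_time_saving_s / 3600.0 * machine_rate_per_h
insert_cost_per_shot = insert_cost / insert_life_shots
net_per_shot = saving_per_shot - insert_cost_per_shot

print(f"saving per shot:      {saving_per_shot:.4f} EUR")
print(f"insert cost per shot: {insert_cost_per_shot:.4f} EUR")
print(f"net benefit per year: {net_per_shot * annual_volume:,.0f} EUR")
```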
Authored & Presented By Graham Hill (Caterpillar Inc)
AbstractThe structural simulation team at Perkins Engines' site in Peterborough, UK, helped pioneer the use of finite element techniques in engine simulation applications. Since acquisition by Caterpillar Inc. during the 1990s, the team and its capabilities have continued to grow and flourish. The team has made significant contributions to many engine programmes throughout the Caterpillar business.In the early days, our work required the development of our own software codes, notably for the assessment of crankshaft dynamics, bearing performance and fatigue life prediction. Despite the subsequent proliferation of commercial software, these internally-developed codes are still seen to offer competitive advantage and continue to be developed.Historically, the team's focus was on core engine components, but this has steadily expanded to include ancillary items and integration within complex machinery. From the outset, our work has been under-pinned by detailed process documents that describe preferred methods. This ensures a consistency of approach, giving confidence in our decision-making process. One of our fundamental beliefs is that good simulation, early in the design process, leads to robust products.Since 2020 the team has been assisting the electrification of a number of machine-types through the simulation of battery packs and modules. Despite the obvious differences between engine and battery products, the team has been able to draw upon its accumulated knowledge to offer useful insight from the outset. As we gained experience with battery installations we were able to expand the range of simulations we could offer. In the fullness of time, it is anticipated that some of our newly-gained battery knowledge will be applied to engine simulation.The scope of work has ranged from simple static analysis to complex non-linear static and dynamic analyses that have stretched our capabilities and knowledge to their limits. However, by falling back on our engine knowledge and experience we have been able to maximise the degree of confidence we were able to give our design teams.This presentation discusses how we were able to take decades of learning in one field and apply it to another.
Authored & Presented By Kim Nielsen (Grundfos)
AbstractThermal management plays a critical role in electronics, particularly within Grundfos, where reliable and efficient systems are essential to our product offerings. To address thermal management challenges effectively, simulations are widely recognized as a powerful tool. This paper introduces a comprehensive engineering tool developed by Grundfos that automates and democratizes high-fidelity thermal simulations for PCB designers and engineers. The Thermal PCB tool is a web-based solution designed to perform automated thermal simulations of printed circuit boards (PCBs) and advanced thermal component models. Based on the latest trends in web-based user frameworks for app creation and on Python packages that enable the use of simulation software through Python, this tool enables investigations into the thermal performance of electronics and assemblies. Users can easily import the PCB layout file along with the related power budget. With the ability to customize the surrounding thermal conditions, users can build a simplified model using an automated interface. This tool leverages both actual PCB designs and a vast library of complex 3D thermal models, encompassing passive and active components, as well as power modules. The thermal models are carefully validated through tests, ensuring a high degree of accuracy in the simulation results. By providing engineers and PCB designers with access to pre-built thermal models, the tool eliminates the need for extensive expertise in simulation software. As a result, a wider range of individuals within the organization can employ thermal simulations without relying solely on simulation experts, thus democratizing the use of this valuable technology. One of the primary benefits of the Thermal PCB tool is its utility in verifying and validating new designs. By generating accurate thermal analyses, it reduces the dependency on physical tests. This not only saves valuable time and resources but also lowers research and development costs. Additionally, the tool accelerates the product development cycle, enabling faster time-to-market for Grundfos. By obtaining critical thermal insights at an early stage, engineers can make informed design decisions, leading to improved product quality. In conclusion, the Thermal PCB tool represents a significant advancement in engineering capabilities by automating and democratizing high-fidelity thermal simulations of electronics. By providing PCB designers and engineers with an accessible platform for performing thermal simulations and validating performance, Grundfos enhances product development efficiency, reduces costs, and improves product quality. This tool epitomizes Grundfos' commitment to innovation and technological excellence in the field of thermal management.
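At its core, such a tool wraps the simulation behind a plain Python function that takes the imported layout data and power budget and returns temperatures; the web front end only collects inputs and displays results. The sketch below is a deliberately simplified lumped thermal-resistance stand-in for that function, with invented component names and values; the real tool drives validated 3D component models through simulation software.

```python
# Simplified stand-in for the solver call behind a thermal-PCB web tool:
# a lumped junction-to-ambient estimate per component. Illustrative only.

def estimate_component_temperatures(power_budget_W, theta_ja_K_per_W, t_ambient_C):
    """Junction temperature per component from dissipated power and
    junction-to-ambient thermal resistance (from a component library)."""
    return {name: t_ambient_C + power_budget_W[name] * theta_ja_K_per_W[name]
            for name in power_budget_W}

# Example "power budget" and component thermal data, as a PCB designer might
# enter them in the web interface (values are invented for the sketch).
power_budget = {"buck_converter": 1.8, "mcu": 0.6, "gate_driver": 1.1}    # W
theta_ja     = {"buck_converter": 28.0, "mcu": 45.0, "gate_driver": 35.0} # K/W

for name, t in estimate_component_temperatures(power_budget, theta_ja, 45.0).items():
    print(f"{name:>15s}: {t:5.1f} C")
```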
DescriptionIn Manufacturing Process Simulation, we use multi-scale, multi-physics models to obtain high-fidelity predictions of the transformations in the material during the production of metal or composite parts. This often requires material data for which there is no standardised way of testing. Also, the material modelling and manufacturing process simulation approaches that exploit these data to predict what will happen during manufacturing are rarely standardised. The consequence of this for industry is low re-use of material data and modelling: in complex supply chains, material characterisations are often repeated and models often created again from scratch. Worse: opportunities to use simulation are lost because people are not confident re-using existing data and models and therefore simply refrain from using simulation. The objective of this session is to reflect on the current state of standardisation in this particular field, and identify opportunities to improve things. The discussion will be started by four short presentations:
• The need for standardisation for manufacturing process simulation in the aerospace industry – Sjoerd van der Veen (Airbus): Due to a lack of standardisation today, material characterisations are often repeated and models often created again from scratch. Worse: opportunities to use simulation are lost because people are not confident in re-using existing data and models and therefore simply refrain from using simulation.
• Development of Modelling Guidelines for Welding in the Nuclear Industry – Paull Hurell (Amentum), Michael Roy (TWI): Weld modelling guidelines and measurement validation benchmarks are a valuable part of the analyst’s toolkit, to aid good decision making. This presentation describes weld simulation examples for austenitic and ferritic steels, including comparisons with experimental measurements, developed for the civil nuclear industry.
• Generation, validation and management of material data and models in composites process simulation – Goran Fernlund (Convergent Manufacturing Technologies), Martin Roy, Alastair McKee (Convergent Manufacturing Technologies): Process simulation often falls short of its full potential due to the absence of established and efficient methods. To unlock its capabilities, we must transition from 'one-off artistry' to standardized, validated workflows. This presentation explores the current state of composites process simulation, focusing on the generation, validation, and management of material data and models. It also highlights future opportunities and pathways to achieve them.
• NAFEMS Benchmark Examples of Buckling Occurrence, Residual Stresses and Bending Distortion in Additive Manufacturing – Yongle Sun (Cranfield), Oliver Found (TWI), Anas Yaghi (TWI): An important part of standardisation procedures is to make available representative benchmark examples to the modelling and simulation community. In this presentation, we provide two benchmark examples that compare experimental measurements and numerical simulations of additive manufacturing processes.
10:50
Presented By Mohamed Besher Baradi (Robert Bosch GmbH)
Authored By Mohamed Besher Baradi (Robert Bosch GmbH), Reto Koehler (Robert Bosch GmbH), Muhammed Atak (Robert Bosch GmbH), Andreas Karl (Robert Bosch GmbH), Andreas Kerst (Robert Bosch GmbH), Lajos Kocsan (Robert Bosch GmbH), Sebastian Fricke (Robert Bosch GmbH), Ulrich Schulmeister (Robert Bosch GmbH)
AbstractRecently, there has been a significant increase in the demand for the systematic integration of Modeling and Simulation (M&S) in development methods as well as in release and design standards and processes. Furthermore, M&S now plays a pivotal role in driving virtualization. With this growing reliance on M&S in engineering decisions, the credibility of M&S is gaining increased importance. This covers the common understanding of credibility approaches, the assessment of the credibility of M&S, and the traceability of M&S data.At NAFEMS World Congress 2023, we presented the Robert Bosch (RB) credibility of simulation framework. This work is an extension of the previous presentation and demonstrates the progress in the framework, focusing mainly on the standardization and scalability of the approach. The RB credibility of simulation framework is designed to establish a common understanding across diverse domains and products, providing a unified generic approach. We highlight the generalized framework approach based on several existing domain-specific approaches. We describe the phases of the framework, addressing the engineering task description, defining the decision consequence, M&S risk assessment, credibility assessment, and making simulation-informed engineering decisions.For M&S risk and credibility assessment, we highlight the updated Verification, Validation and Uncertainty Quantification (VVUQ) credibility activities encapsulated in a Credibility Wheel, which now integrates M&S risk assessment into the overall credibility assessment. This approach aims for a systematic integration of M&S risk and error management into the decision-making process. The analyzed M&S risks provide orientation for the credibility target for each credibility activity. Furthermore, our experiences over the last two years demonstrate the need for tailored credibility assessments for specific application domains.In conclusion, the work outlines the challenges faced in establishing the credibility of the simulation framework and incorporating it within Bosch, a company with a diverse application domain scope. We provide insights into the future direction of the framework to establish the M&S activities as a cross-domain standard, integrating with external standards driven by Prostep smartSE.
Presented By James Imrie (Rescale)
Authored By James Imrie (Rescale)
AbstractThe rise of Large Language Models (LLMs) has opened transformative opportunities across industries and engineering simulation processes. This session delves into the innovative use of LLMs in post-processing simulation software output logs, addressing a critical challenge for end users and enterprise organisations alike: extracting actionable insights efficiently.Enterprises today generate terabytes of simulation data daily, and there is an increasing need for automation in retrieving simulation results, classifying errors (e.g., mesh inaccuracies, memory limitations, licensing issues), and preparing AI-ready datasets from trusted simulation outputs. LLMs offer a powerful solution, acting as a simulation co-pilot to automate these tasks with precision. This enables enterprises to troubleshoot workflows more efficiently while reducing manual effort.This paper explores the end-to-end process of deploying local LLMs in simulation workflows, maintaining data privacy and data isolation. It covers key steps, including benchmarking LLM performance, applying continuous integration and deployment (CI/CD) pipelines, and validating customer-specific post-processing use cases. By leveraging these strategies, enterprises can establish a seamless data pipeline that automates error classification and generates insights.Real-world use cases will illustrate the application of LLM-powered tools on industry-standard HPC simulation software such as (but not limited to) Siemens Star-CCM+, Dassault Systèmes Abaqus, and Ansys Fluent. A live demonstration will further highlight the practical benefits, showcasing how organisations can achieve effective and rapid troubleshooting, reduce reliance on support teams, and increase user autonomy. These improvements not only enhance operational efficiency but also contribute to cost-effective workflows through scalable and structured data strategies.Beyond the technical implementation, the paper will discuss the broader implications of integrating LLMs into simulation workflows. Enterprises can position themselves for the future by enabling more intelligent data practices while driving innovation and collaboration across multidisciplinary teams.This session is designed for engineering leaders, IT professionals, and simulation experts looking to enhance their HPC operations through cutting-edge technologies. Attendees will leave with actionable insights on harnessing LLMs to transform simulation data management, reduce inefficiencies, and unlock the full potential of AI in engineering.
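A minimal sketch of the log-classification step is shown below: a regex pre-filter keeps only the error-like lines so the prompt stays small and no bulk data leaves the local environment, and the filtered text is posted to a locally hosted LLM. The endpoint URL, model name and OpenAI-compatible response format are assumptions made for this sketch, not part of any specific product mentioned in the abstract.

```python
import re
import requests

# Sketch: classify a solver output log with a locally hosted LLM.
# The endpoint, model name and response schema are assumptions for illustration.

CATEGORIES = ["mesh error", "out of memory", "license failure", "converged OK"]

def prefilter(log_text: str, max_lines: int = 40) -> str:
    """Keep only error/warning-like lines to limit prompt size."""
    hits = [line for line in log_text.splitlines()
            if re.search(r"error|warn|fail|license|memory", line, re.IGNORECASE)]
    return "\n".join(hits[:max_lines]) or log_text[-2000:]

def classify_log(log_text: str) -> str:
    prompt = (f"Classify this CAE solver log into one of {CATEGORIES}. "
              f"Answer with the category only.\n\n{prefilter(log_text)}")
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",   # assumed local LLM endpoint
        json={"model": "local-llm",                     # assumed model name
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    sample = "Step 12: ERROR: insufficient memory allocating 32 GB for matrix factorization"
    print(classify_log(sample))   # expected classification: "out of memory"
```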
Presented By Mathilde Laporte (DLR - Deutsches Zentrum für Luft und Raumfahrt (DLR))
Authored By Mathilde Laporte (DLR - Deutsches Zentrum für Luft und Raumfahrt (DLR)), Gerhard Hippmann (Deutsches Zentrum für Luft und Raumfahrt (DLR))
AbstractThe German Aerospace Center (DLR) develops pioneering rail vehicle concepts that enable improvements in terms of energy and resource efficiency, as well as wear and comfort. For passenger car bodies, the focus is on lightweight construction and new materials. However, due to their stiffness and vibration behavior, lightweight structures involve a conflict of objectives with regard to passenger comfort. To meet these challenges, suitable design methods and verification concepts are needed that fully exploit the lightweight potential of the materials used. So far, static equivalent loads are generally used in the design of rail vehicles in accordance with the fatigue strength verification defined in the standards, such as EN 12663-1. However, this procedure assumes metallic materials, and it is not suitable for adequately considering the dynamic loads that actually occur during operation. This can lead to unnecessary oversizing of the vehicles, which runs counter to the goal of lightweight construction. We therefore developed a methodology in which design and strength verification are carried out using dynamic loads from flexible multi-body simulation and fatigue strength calculation. This can pave the way for the application of lightweight construction concepts for railway vehicle car bodies. Our method first consists of performing a model reduction in order to extract the flexible body of the car body. Then, multi-body simulations are carried out to determine realistic dynamic loads and stresses that occur during operation. The results of this stress analysis are used directly for a fatigue simulation. In addition, aerodynamic loads from CFD simulations are considered for the fatigue analysis. In the last step of our method, the welds are defined directly in the fatigue analysis software and a structural durability analysis is performed. This last simulation allows us to detect the first damaged locations on the structure and finally adapt the design for the required lifetime.
Presented By Hunor Erdelyi (Siemens Industry Software)
Authored By Hunor Erdelyi (Siemens Industry Software), Ahmed Bayoumy (Siemens Industry Software ULC), Valerio Grazioso (Siemens Industry Software NV), Norman Eng (Siemens Government Technologies), Michael Kokkolaras (McGill University), Roberto Dippolito (Siemens Industry Software NV)
AbstractThe design of complex engineering systems, such as aircraft, gas turbines, or modern cars amongst others, presents significant challenges on both technical and project management fronts. Finding an optimal design solution to meet the demanding performance, cost, and safety requirements of such complex engineering systems is not trivial. Traditional monolithic design optimization approaches often struggle to handle the complexity of such systems and their inherent interdependencies, where various disciplines interact in intricate ways (such as aerodynamics, structures, propulsion, and controls, to name only the most obvious disciplines in the case of an aircraft). An appealing approach to solve this challenge is to apply a multidisciplinary design optimization technique, which enables the integration and consideration of multiple disciplines in the design process. However, in practice, complex engineering systems are often decomposed into siloed design teams based on their specific functionality, regulatory requirements, and role in ensuring safety and performance of the system. These teams focus on optimizing their subsystems using isolated tools, making it difficult for systems engineers to integrate these optimizations cohesively and efficiently. This poses an additional challenge that monolithic multidisciplinary design optimization techniques cannot handle. To tackle this problem, the authors propose a distributed multidisciplinary design optimization solution. The proposed solution enables an interactive decomposition of the global problem into separate subproblem studies which can be run in a distributed manner in parallel, while still considering the interactions between the different disciplines involved in the multidisciplinary design optimization problem. The two main elements of the proposed methodology are the application of Nonhierarchical Analytic Target Cascading for solving the multidisciplinary design optimization problem and a client-server framework that enables distributing the different disciplinary studies to different client applications, which can be located anywhere around the world. To represent the complete system, including all subsystems and their interdependencies, concisely, the authors employ a Design Structure Matrix (DSM), also known as an N-squared (N2) matrix. This approach not only makes it possible to tackle the design challenge of complex engineering systems, but also enables and facilitates the collaboration of separate design teams around the world. In this publication, the authors introduce the new solution for distributed multidisciplinary design optimization and demonstrate it on a few selected application cases.
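The Design Structure Matrix mentioned above is straightforward to prototype as a square boolean coupling matrix in which entry (i, j) records that discipline i feeds data to discipline j. The disciplines and couplings in the sketch below are generic illustrations, not the authors' application cases.

```python
import numpy as np

# Minimal N2 / Design Structure Matrix: rows and columns are disciplines,
# a True entry (i, j) means discipline i feeds data to discipline j.
# Disciplines and couplings are generic illustrations only.

disciplines = ["aerodynamics", "structures", "propulsion", "controls"]
n = len(disciplines)
dsm = np.zeros((n, n), dtype=bool)

def couple(src, dst):
    dsm[disciplines.index(src), disciplines.index(dst)] = True

couple("aerodynamics", "structures")   # loads
couple("structures", "aerodynamics")   # deformed shape (feedback coupling)
couple("propulsion", "aerodynamics")   # thrust / installation effects
couple("aerodynamics", "controls")     # stability derivatives

# Below-diagonal entries (for this ordering) are feedback couplings: the
# subproblems that a distributed coordination strategy must iterate between.
feedback = [(disciplines[i], disciplines[j])
            for i in range(n) for j in range(n) if dsm[i, j] and i > j]
print("feedback couplings needing coordination:", feedback)
```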
BiographyHunor Erdelyi has been with Siemens Digital Industries Software for fifteen years, working in various research and development roles. He holds a PhD in Mechanical Engineering obtained at Transilvania University of Brașov. Hunor is currently technical product manager for Simcenter HEEDS in the domain of Multi-disciplinary Analysis & Optimization (MDAO).
Presented By Liam McGovern (University of Belfast)
Authored By Liam McGovern (University of Belfast), Shiyong Yan (Blow Moulding Technologies), Gary Menary (Queen's University Belfast)
AbstractPET bottles, produced at a staggering rate of 1 million per minute, significantly impact plastic waste, energy consumption, and sustainability. Manufacturers face the challenge of minimising material usage while ensuring containers meet performance demands such as top-load and burst resistance. The various process parameters in Stretch Blow Moulding (SBM) interact in complex ways that make it difficult to understand and control the process. This lack of control results in inefficiencies, leading to wastage and poor material distribution. Modern approaches prioritise sustainability by using simulation tools to optimize preform design and process conditions, reducing waste and reliance on trial-and-error methods.A series of forming simulations with different process parameters were conducted to produce virtual PET containers with varying material and modulus distributions. These virtual bottles were subsequently evaluated for empty top-load (ETL) performance, a necessary test to evaluate their ability to withstand stacking forces during filling and transport.A two-stage, bidirectional numerical model was developed to predict the structural performance of a bottle from process parameters, and to determine the optimal process parameters for a desired structural performance. The first stage of the model aimed to understand the influence of process parameters on the material distribution of a bottle. Inputs to the model were the processing parameters, and the target values were material distributions. Following hyperparameter optimization, a relationship was established. Through this intermediary step, there is enhanced interpretability into the manufacturing process.The second stage of the model involved developing the link between material distributions and the top-load performance of the bottle. Gaussian Process Regression (GPR), with the Radial Basis Function (RBF) kernel, was used to model this relationship. Through constrained global optimization, the required processing parameters were determined for a desired ETL performance. This model allows for the identification of optimal material distributions, thereby reducing wasted material, improving manufacturing efficiency, and providing new insights into the SBM process.
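The second modelling stage, learning top-load performance from a material-distribution descriptor with GPR and an RBF kernel and then searching for the lightest distribution that meets a target load, can be prototyped with scikit-learn and SciPy as below. The training data is synthetic, the descriptor is reduced to two features, and the mass proxy and target value are invented for the sketch; it illustrates the general technique rather than the authors' model.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Prototype: GPR (RBF kernel) surrogate from material distribution to empty
# top load (ETL), then constrained optimization for the lightest acceptable
# distribution. All data and values are synthetic placeholders.

rng = np.random.default_rng(1)
X = rng.uniform(0.1, 1.0, size=(40, 2))                    # e.g. wall thickness in two zones (mm)
y = 120 * X[:, 0] + 80 * X[:, 1] + rng.normal(0, 2, 40)    # synthetic ETL response (N)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gpr.fit(X, y)

target_etl = 150.0                                         # required empty top load (N)

def mass(x):                                               # simple proxy for material usage
    return x[0] + x[1]

constraint = {"type": "ineq",
              "fun": lambda x: gpr.predict(x.reshape(1, -1))[0] - target_etl}

result = minimize(mass, x0=np.array([0.8, 0.8]),
                  bounds=[(0.1, 1.0), (0.1, 1.0)], constraints=[constraint])
print("optimal distribution:", np.round(result.x, 3),
      "predicted ETL:", round(float(gpr.predict(result.x.reshape(1, -1))[0]), 1))
```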
BiographyI am a second year PhD student in Queen's University, Belfast (QUB). Having studied Mechanical Engineering at Undergraduate and Master's level at QUB, I am now taking on a PhD project in a collaboration with QUB and Blow Moulding Technologies (BMT). My research is focused on understanding and optimising the Stretch Blow Moulding (SBM) process.
Presented By Ajitkumar Jeyakumar (SimScale)
Authored By Ajitkumar Jeyakumar (SimScale), Ceyhun Sahin (Noesis Solutions NV)
AbstractEfficient thermal management in electric vehicle (EV) batteries is essential for maintaining the performance, safety, and longevity of the EV battery. However, traditional approaches rely heavily on analytical methods to calculate heat transfer coefficients, which can be time-consuming and inaccurate, given the dynamic nature of variables like the state of charge and ambient temperature. This paper proposes an approach that integrates cloud-based simulations and AI-based surrogate models to enable predictive insights into battery performance. This paper focuses on the battery pack of an EV to study its thermal response at different operating conditions. High-fidelity cloud-native simulations are used to create a design space for the battery temperature and the coolant-flow heat transfer coefficient at different ambient temperatures and battery states of charge (SOC). Each individual simulation took 45 to 60 minutes; the parallelization capabilities of the cloud platform provided a training dataset of 24 parametric cases within 1 hour. The next step focuses on leveraging the data from these simulations to predict instantaneous heat transfer coefficients for electric vehicle (EV) coolant systems. Thus, the initial simulations, each requiring close to an hour, can be replaced by surrogate models that generate the same results in less than a minute. This speed is critical for enabling the dynamic adjustments required to manage rapid variations in EV battery conditions during operation. In real-world driving conditions, these models can deliver predictions in real time for dynamic adjustments to manage rapid changes in battery conditions. Machine learning algorithms enable real-time prediction of thermal conditions, crucial for optimizing coolant power based on varying state of charge and ambient temperature, enhancing EV batteries’ longevity and performance. Furthermore, the integration of cloud-based simulations and AI models ensures scalability, supporting the development of increasingly complex battery designs as well as enabling integration into digital twin environments. These advancements mark a significant step toward achieving safer, more efficient, and sustainable energy solutions for a decarbonized future.
Authored & Presented By Leonel Garategaray (INENSIA LLC)
AbstractIn today’s rapidly evolving consumer goods sector, digital transformation plays a pivotal role in driving innovation and optimizing product and process development. This abstract highlights a pioneering initiative aimed at democratizing Simulation and Data Management for a global leader in the food and snack industry, leveraging a commercial SPDM software technology. As part of a broader “Digital First” strategy, the project centers around the development of a Digital Developer Portal (DDP), designed to streamline research and development (R&D) processes by providing seamless access to advanced digital tools via a single, easy-to-use platform. This initiative represents a shift from traditional physical prototyping to a more efficient, virtual-first approach. The DDP allows product developers to begin any new project by utilizing digital simulations to guide decisions related to formulation, processing, and packaging. This digital toolset consolidates simulation data, models, and associated workflows, empowering cross-functional teams to make data-driven decisions with greater speed and accuracy. The DDP integrates a commercial Simulation Process and Data Management (SPDM) platform and a Process Integration & Design Optimization (PIDO) software. Together, these tools create an ecosystem that allows the company to democratize its R&D processes across teams distributed worldwide. The system will enable users to: 1. Access a wide range of digital tools from a centralized portal. 2. Retrieve simultaneous outputs from multiple tools to improve decision-making. 3. Automate workflows, where outputs from one tool feed directly into another. 4. Compare various scenarios quickly through automated reports. The introduction of this system is expected to deliver significant business benefits, including: • Simulation-led design, ensuring that every developer begins their project by leveraging digital tools, leading to more informed decision-making early in the product development cycle. • Uniting global teams through a single collaborative platform, which enhances workflow management and scenario evaluation across locations. Key components of the DDP include: 1. Democratization: The platform enables a broad range of users – from product to packaging and process experts – to access sophisticated tools, models, and data without requiring deep technical knowledge in simulation or advanced technologies like AI/ML. 2. Workflow & Automation: The system allows for the creation, versioning, and publishing of digital tools that range from basic calculators to complex simulation solvers. These tools can be combined into automated workflows, which can be executed in a guided, manual way. 3. Data Management: An object-oriented structure ensures robust management of inputs, outputs, and associated documentation, providing traceability, version control, and the ability to generate insights through reports and visualizations. 4. Traceability & Analysis: The platform provides full traceability of projects, scenarios, data, and workflows, offering analytics capabilities for both business and IT users to compare and report on various outcomes. The SPDM platform acts as the backbone for secure simulation data management, while the PIDO tool facilitates the automation of workflows and optimization tasks.
Together, these platforms enable the client to accelerate its transition from physical to virtual prototyping, driving operational efficiencies, enhancing collaboration, and fostering innovation in product development.This case study illustrates the transformative potential of Simulation Process and Data Management (SPDM) systems in enabling global teams to adopt digital-first approaches, reducing time to market, and improving overall R&D effectiveness in the consumer goods industry.
11:10
Presented By Martin Krammer (Knorr-Bremse SfS GmbH)
Authored By Martin Krammer (Knorr-Bremse SfS GmbH), Martin Benedikt (Virtual Vehicle Research GmbH), Christoph Miethaner (TUV SUD), Frank Guenther (Knorr-Bremse Rail Vehicle Systems), Dominique Morin (Alstom)
AbstractSimulation based methodologies are commonly used in research and industrial engineering. To achieve added value through simulations, it is vital that these simulations comply with pre-defined quality criteria. For this reason, several simulation quality standards have evolved over time. They emerged out of different engineering needs and aim to formalize simulations according to specific aspects, like modeling, verification, or validation. Simulation technology has become even more capable. On one hand, simulation results are increasingly presented as predominant evidence in engineering decision processes. On the other hand, simulations are more frequently part of the product itself, which is to be engineered. For these reasons, novel simulation quality standards are currently being developed. Both sectors, the automotive industry and the railway industry, are facing huge challenges. In automobile engineering, advanced driver assistance systems (ADAS) are advancing in the direction of fully automated driving (AD). The shift to alternative propulsion systems will transform automobiles into subsystems of the energy system. In railway engineering, development, maintenance and operation activities are subject to continuous digitalization involving railway and infrastructure operators, vehicle manufacturers, and suppliers. In both domains, these efforts will unlock much needed eco-friendly and safe transport capacities for passengers and freight. There is a strong trend towards better vertical integration, i.e. to associate suppliers and manufacturers. Standardization helps all stakeholders to speak the same language. National authorities, too, increasingly accept simulation, for the various reasons mentioned earlier. This contribution has the potential to assist the simulation community in the fulfillment of authority requirements, as standardization and codes of practice are pathways to success. In this context, it is necessary to identify applicable, relevant simulation quality standards, as they all follow very heterogeneous approaches. In this article, we map the current state of the simulation quality standards landscape for automotive and railway engineering. We show the analysis of a carefully selected list of current and upcoming simulation quality standards by looking at their origins, scopes, objectives, and requirements. This contribution enables the simulation community to make an informed decision when it comes to simulation quality standard selection and deployment. It is intended to provide a scheme for orientation and guidance to all members of the simulation community. This includes simulation and test engineers, researchers, the management, and all decision and policy makers. At this point in time, it is not our intention to match simulation standards to specific simulation problems the community might encounter. To reach these goals, we analysed the simulation quality standards landscape and identified two key issues. First, the simulation quality standards landscape is highly diverse. Standards position themselves at various levels of abstraction and vary from generic to application specific. Second, the simulation quality standards landscape is about to change at this point in time, as the use of simulations increasingly exceeds conventional desktop engineering. For many new applications simulation represents the only feasible method for analysis or prediction, as physical testing is uneconomic or dangerous, or the number of possible test cases is beyond what real-world testing can cover.
Furthermore, simulation is often becoming an integral part of the application itself (“Digital Twin”). In these cases, simulation helps to optimize the application, improve product quality or quality of service, and enables time and cost savings. As a consequence, the risk that simulations deliver insufficient results must be considered and handled systematically. For these reasons, already existing standards may no longer be entirely suitable. Some novel simulation quality standards do consider these issues and are therefore described in this contribution.
BiographyMartin Krammer is an expert for computer simulations at Knorr-Bremse Rail Vehicle Systems, located in Munich, Germany and Moedling, Austria. He studied telematics engineering at Graz University of Technology and received his Master’s degree in 2010. He spent more than 13 years at the border between academic research and industrial development during his career at Virtual Vehicle Research in Graz, Austria. He received his PhD degree from Graz University of Technology in 2022, after submitting his thesis on the integration of models and real-time systems into simulation environments. Since 2018 he has been contributing to several working groups related to the standardization of simulation, including the Modelica Association, iNTACS, and the European Committee for Standardization (CEN).
Authored & Presented By Prasad Mandava (Visual Collaboration Technologies)
AbstractAs companies face increasing competition, the need to move faster and become more efficient in their product development processes is critical. As companies move towards zero, or nearly zero, physical prototypes, greater emphasis is placed on simulation to take a more strategic role in bringing new products to market. Many of the current simulation processes are unsustainable for reaching this goal. The integration of simulation, SPDM and AI offers a new horizon for increasing both the efficiency and effectiveness of simulation capabilities across the enterprise. This presentation will focus on three areas within these technology domains that, when integrated, provide the basis for digital transformation of the process of simulation.
• Extraction and compilation of the most critical simulation data from large data lakes of result files. This process involves the interrogation of heterogeneous simulation models that span the multiple physics of system-level product development.
• Management and accessibility of targeted simulation results throughout the enterprise. Additionally, the implementation of SPDM streamlines simulation workflows by connecting appropriate simulation activities.
• Enabling rapid AI learning by culling and delivering to the AI system targeted simulation results. Once fueled by this information, AI systems can deliver the speed and accuracy needed to drive an exponential increase in the number of simulation use cases needed to create and validate complex product performance.
An additional evolving area to support these domains is Rapid Results Review (RRR), which focuses on adding speed and clarity to the interpretation and impact of multiple simulation results. In each of these areas the presentation will detail the types and flow of the simulation data as it moves from the early stages of product development through to design finalization. Beginning with large-scale simulation results, the presentation will discuss the type of data being utilized and managed at the primary points of engagement across the product lifecycle. Additional focus will be placed on quantifiable improvements in each area over current and more traditional processes in simulation and simulation management.
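As a rough illustration of the first of these areas, the sketch below shows one way targeted scalar results could be culled from a lake of per-run summary files into a single table for downstream SPDM or AI use. The directory layout, file names and KPI keys are assumptions for illustration only, not the presenter's tooling.

```python
import json
from pathlib import Path

import pandas as pd

COLUMNS = ["run_id", "discipline", "max_stress_MPa", "mass_kg", "status"]


def collect_kpis(result_dir):
    """Walk a directory of per-run JSON summaries and compile the targeted KPIs into one table."""
    rows = []
    for summary in Path(result_dir).glob("*/summary.json"):   # assumed layout: <run_id>/summary.json
        data = json.loads(summary.read_text())
        rows.append({
            "run_id": summary.parent.name,
            "discipline": data.get("discipline"),             # e.g. "crash", "CFD", "NVH"
            "max_stress_MPa": data.get("max_stress_MPa"),
            "mass_kg": data.get("mass_kg"),
            "status": data.get("status", "unknown"),
        })
    return pd.DataFrame(rows, columns=COLUMNS)


table = collect_kpis("./results_lake")
converged = table[table["status"] == "converged"]             # only converged runs feed the AI step
print(converged.describe(include="all"))
```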
Presented By Michael Klein (INTES GmbH)
Authored By Michael Klein (INTES GmbH)Eric Heinemeyer (INTES)
AbstractFor industrial applications, fatigue analysis is established as an important addition to strength analysis with finite elements. Based on stress results, design decisions are made to improve the performance of parts and assemblies. Damage, the result of the fatigue analysis, significantly extends the knowledge about structural behavior under loads compared to pure stresses. In lightweight construction, this knowledge advantage is crucial. The industrial standard so far is to run fatigue analysis in a separate software from the stress analysis software. This error-prone, difficult and time-consuming process slows down development. A fundamental change is necessary to improve the process significantly. This necessary breakthrough can only be achieved by a new approach in which fatigue analysis is integrated into the FEM solver. This offers two principal advantages for industrial application: access is simplified, and efficiency is raised drastically. The new integration approach makes it effortless for the user to add fatigue analyses to their stress analyses. The integration of the fatigue analysis in the FEM solver enables simplified access. Damage can be calculated and exported as an additional secondary result during the stress calculation. Already computed matrices, the stresses at the element Gauss points together with the element shape functions, and stresses in the interior of the structure for accurate stress gradients are used directly. The additional effort for material fatigue input is small, and the input can be reused. Material input is significantly simplified by an SN-curve generator based on the FKM guideline. The SN-curve generator determines the material SN-curve automatically from the inputs: one of the 23 FKM material group numbers, the failure strength and the yield stress. Since not only nodes but also the full element and surface information are available, the local SN-curves can then be calculated internally, considering influence factors such as the surface finish, roughness factor or boundary layer factor, without any additional user effort. The damage results are determined directly by the general FEM software, and the standard industrial process is reduced to starting one single software. Data management of huge stress files is no longer necessary. This speeds up product development, makes it simpler and improves its quality at the same time. The second major advantage of the integration is the drastically reduced run time and disc space requirement. The solver data structure is optimized for HPC, a uniform database is used, interfaces are eliminated, and unnecessary or duplicate data storage is completely avoided. The hard disc limit that previously existed for large, finely meshed models is eliminated by automatic “on the fly” stress calculation, which means that only the stresses required at that moment are calculated and not stored. This allows significantly larger industrial models, and the run time for fatigue analysis is drastically reduced. The fast run time and the simple input enable fatigue analysis as a standard application for FEM analyses. The improvements and the simplified handling are clearly shown using a practical example of an analysis process and the SN-curve determination. The fast computing times and low storage space consumption are illustrated using an industrial example of a truck chassis. The integration of fatigue analysis is done in the commercial FEM solver software PERMAS.
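To make the SN-curve generation idea concrete, here is a minimal, hedged sketch of a synthetic SN-curve generator driven by a material group, an ultimate strength and a yield stress. The group factors, slope and knee point below are illustrative placeholders, not the FKM values or the PERMAS implementation.

```python
import numpy as np

# assumed per-group factors: (fatigue strength factor, SN slope k, knee-point cycles)
GROUP_FACTORS = {
    "steel_wrought": (0.45, 5.0, 1.0e6),
    "aluminium_wrought": (0.30, 5.0, 1.0e6),
}


def synthetic_sn_curve(group, R_m, R_p02, n_points=50):
    """Return (cycles, stress amplitude) for a Basquin-type curve S = S_D * (N_D / N)**(1/k)."""
    f_w, k, N_D = GROUP_FACTORS[group]
    S_D = min(f_w * R_m, 0.75 * R_p02)            # crude cap by yield, purely illustrative
    N = np.logspace(3, 7, n_points)
    S = np.where(N < N_D, S_D * (N_D / N) ** (1.0 / k), S_D)   # horizontal beyond the knee
    return N, S


N, S = synthetic_sn_curve("steel_wrought", R_m=600.0, R_p02=420.0)
print(f"knee-point amplitude: {S[-1]:.1f} MPa at 1e6 cycles")
```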
BiographyMichael Klein is a senior manager at INTES, Germany, responsible for engineering, training, sales, business development, and new methods based on customer requirements. After completing his doctoral thesis on shape optimization based on high-order elements in 2000, he started directly at INTES and has worked in various positions there since then. His main areas of interest are optimization, contact and fatigue analysis. He has applied his knowledge in many industrial application areas with a focus on engines, transmissions and brakes.
Presented By Pratik Upadhyay (Dassault Systemes Deutschland GmbH)
Authored By Pratik Upadhyay (Dassault Systemes Deutschland GmbH)Nick Stoppelkamp (Dassault Systemes Deutschland GmbH) Angel Sinigersky (DS Bulgaria EOOD)
AbstractTraditional structural optimization workflows typically adjust one type of design variable linked to a physical property in the finite element model per optimization run, such as section thickness, material density, or nodal coordinates. When multiple property types need to be modified, optimizations are performed sequentially. This approach requires CAD reconstruction and manual intervention to change the setups between runs, creating dependencies on the optimization sequence. Additionally, the optimized design may not reach its full potential due to the inability to harness synergies between the design variable types. To address these limitations, a novel approach has been developed to enable the simultaneous optimization of multiple design variable types within a single optimization run. By consolidating the process, this method reduces the number of cycles and eliminates the need for CAD reconstruction or manual setup changes between optimization runs, leading to optimized designs with enhanced physical performance. It achieves this by leveraging the relationships between the various design variable types in the finite element model. The approach utilizes the agnostic nature of non-linear optimization algorithms to handle multiple design variable types simultaneously. These algorithms require only a vector of design variable values and their corresponding objective function and constraint sensitivities as input. Based on this input, the nonlinear optimization algorithm approximates a convex problem and iteratively calculates new design variable values. Abaqus processes an updated model based on the new values determined by Tosca, incorporating the interactions between the different types of design variables during the sensitivity analysis. The tight Tosca-Abaqus integration further enhances efficiency by enabling simultaneous sensitivity calculations for all design variable types. A runtime performance penalty is avoided by modifying the initial finite element model rather than generating a new one each cycle, which allows Abaqus to skip repeated model checks. This integrated workflow supports advanced features, including contact mechanics, material non-linearities, and geometric non-linearities. In collaboration with a major automotive manufacturer as a pilot customer, we demonstrate the effectiveness of this approach through a combined sizing and bead optimization of a car door. By leveraging the relationship between bead stiffeners and section thickness, this approach achieves a greater reduction in mass than traditional methods, without compromising structural integrity. For comparison, an equivalent sequential optimization was conducted. This approach required manual modifications to the setup between optimization runs. Additionally, the sequential method proved less effective in utilizing bead stiffeners, as it could not simultaneously reduce section thickness to satisfy mass constraints. In contrast, the new simultaneous optimization method effectively harnessed the interplay between bead stiffeners and thickness adjustments, leading to superior design outcomes and better run-time performance. While demonstrated with section thicknesses and bead heights as design variable types, the same approach is applicable to combinations of other design variable types, such as density (topology optimization) or nodal coordinates (shape optimization). 
Additionally, various geometric restrictions can be integrated into the workflow to account for manufacturing processes, ensuring its applicability to a wide range of structural optimization challenges.
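To illustrate the underlying idea that the optimizer only sees a single vector of mixed design variables plus their sensitivities, the toy sketch below updates section thicknesses and bead heights together under a mass constraint. The objective, constraint and update rule are simple stand-ins, not the Tosca/Abaqus sensitivity formulation.

```python
import numpy as np

n_thk, n_bead = 4, 3
x = np.concatenate([np.full(n_thk, 2.0),     # section thicknesses [mm]
                    np.full(n_bead, 0.0)])   # bead heights [mm]
lb = np.concatenate([np.full(n_thk, 0.8), np.zeros(n_bead)])
ub = np.concatenate([np.full(n_thk, 3.0), np.full(n_bead, 5.0)])
mass_limit, step = 7.0, 0.2


def objective_and_grad(x):
    """Stand-in compliance-like objective: thicker panels and taller beads are 'stiffer'."""
    f = np.sum(1.0 / x[:n_thk] ** 3) + np.sum(1.0 / (1.0 + x[n_thk:]))
    g = np.concatenate([-3.0 / x[:n_thk] ** 4, -1.0 / (1.0 + x[n_thk:]) ** 2])
    return f, g


for it in range(100):
    _, grad = objective_and_grad(x)
    x = np.clip(x - step * grad, lb, ub)          # one descent step for both variable types at once
    mass = np.sum(x[:n_thk])                      # stand-in mass: only thickness contributes here
    if mass > mass_limit:                         # crude feasibility projection on the mass limit
        x[:n_thk] = np.clip(x[:n_thk] * mass_limit / mass, lb[:n_thk], ub[:n_thk])

print("thicknesses [mm]:", np.round(x[:n_thk], 2), "bead heights [mm]:", np.round(x[n_thk:], 2))
```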
Authored & Presented By Hyuck-moon Gil (SL Corporation)
AbstractIn the automotive industry, headlamps and rear lamps are critical components that serve as key design elements of a vehicle. Along with their functional roles, these components greatly influence the vehicle's overall aesthetic appeal. Like many other automotive parts, lamps are developed using plastic materials. Due to the inherent properties of plastics, deformations often occur during the injection molding process and may also arise during subsequent production steps, such as the assembly of individual components. These shape deformations can result in dimensional issues such as gaps and misalignments in the final product. In addition to causing unpleasant noise or water leakage, these problems can lead to a gradual decline in the overall quality and performance of the vehicle over time.This study introduces an analytical approach to predicting deformations that may occur during the production process of automotive lamps at the design stage. For this purpose, injection molding analysis results were utilized, and structural analysis techniques were applied to simulate and predict the assembly deformation of the lens and housing. Specifically, deformation amounts for the lens and housing were first estimated through injection molding analysis, and the deformed results were mapped onto the structural analysis model. Subsequently, the fastening process of the two components was simulated to predict the deformation state after assembly.Additionally, a technique was developed to analyze the locations and magnitudes of Gap and Flush issues related to dimensional deformations in the final lamp product. The predicted values from the analysis were compared with actual dimensional measurements of the physical product to validate the accuracy of the method.By leveraging the developed assembly deformation prediction technology, it is now possible to address and improve Gap and Flush issues at the design stage, rather than during the production phase. The developed technology is expected to significantly reduce the development time and costs required to address dimensional issues such as Gap and Flush that may occur in the automotive lamp manufacturing process.
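A hedged sketch of the mapping step is shown below: warpage displacements from the molding mesh are transferred to the structural mesh by nearest-neighbour lookup so that the assembly simulation can start from the deformed lens and housing shapes. The data layout and the simple nearest-neighbour scheme are assumptions for illustration; the authors' mapping procedure may differ.

```python
import numpy as np
from scipy.spatial import cKDTree


def map_deformation(src_nodes, src_disp, dst_nodes):
    """src_nodes (n,3), src_disp (n,3): molding mesh and its warpage displacements;
    dst_nodes (m,3): structural mesh. Returns (m,3) displacements mapped by nearest neighbour."""
    tree = cKDTree(src_nodes)
    _, idx = tree.query(dst_nodes, k=1)
    return src_disp[idx]


rng = np.random.default_rng(0)
molding_nodes = rng.uniform(0.0, 100.0, (5000, 3))                 # stand-in molding mesh [mm]
warpage = 0.01 * molding_nodes + rng.normal(0.0, 0.02, (5000, 3))  # stand-in warpage field [mm]
structural_nodes = rng.uniform(0.0, 100.0, (2000, 3))              # stand-in structural mesh [mm]

mapped = map_deformation(molding_nodes, warpage, structural_nodes)
print("max mapped deflection [mm]:", float(np.abs(mapped).max()))
```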
BiographyHyuck-moon Gil received his Ph.D. in Mechanical Engineering from Kyungpook National University in 2015. He is currently a Professional Engineer at SL Corp., where he specializes in structural analysis. His research interests include finite element analysis, multi-body dynamics simulation and the optimization of manufacturing processes.
Authored & Presented By Eric Link (Siemens Industry Software GmbH)
AbstractIn this presentation, we present a comprehensive overview of the workflow involved in battery simulation, detailing the various domains that contribute to a holistic understanding and management of battery systems. The simulation workflow integrates multi-use equivalent circuit models (ECMs) that serve as the foundational framework for battery behavior representation. These ECMs are characterized through a robust methodology that ensures accuracy and reliability across different usage scenarios.The core of the discussion revolves around the synergies between different simulation domains, particularly focusing on their collective role in enhancing battery thermal management. Effective thermal management is critical for maintaining battery performance and longevity, and our approach leverages the strengths of each simulation domain to address this challenge comprehensively.Another focus is model order reduction (MOR), distinguishing between classical techniques and physical model order reduction methods. Classical MOR techniques, which have been widely used in engineering simulations, simplify complex models by reducing their computational complexity while retaining essential characteristics. However, physical model order reduction methods go a step further by incorporating domain-specific insights, which lead to even more efficient and accurate simulations.Furthermore, we explore the integration of artificial intelligence (AI) methods to accelerate simulation studies. AI techniques, particularly machine learning algorithms, have shown great promise in reducing the time required for simulation without compromising on the fidelity of the results. By training models on extensive datasets, AI can predict battery behavior under various conditions, thus significantly accelerating the simulation process.The importance of combining classical and modern approaches to optimize battery simulations is a key message of the presentation. The interplay between equivalent circuit models, thermal management synergies, model order reduction techniques, and AI methods creates a robust framework that can address the multifaceted challenges of battery simulation. Through this integrated approach, we demonstrate how simulation studies can be made more efficient, paving the way for the development of advanced battery technologies.
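As a minimal illustration of the kind of equivalent circuit model that such a workflow builds on, the sketch below steps a series resistance plus one RC branch over a current profile. Parameter values and the linear OCV curve are illustrative, not characterised data.

```python
import numpy as np

R0, R1, C1 = 2e-3, 1e-3, 5e3   # ohmic resistance [ohm], RC branch resistance [ohm] / capacitance [F]
Q_Ah = 50.0                    # cell capacity [Ah]


def ocv(soc):
    """Crude linear open-circuit voltage curve, purely illustrative."""
    return 3.0 + 1.2 * soc


dt = 1.0
t = np.arange(0.0, 1800.0, dt)
current = np.where(t < 900, 50.0, -20.0)     # 50 A discharge, then 20 A charge

soc, u_rc = 0.9, 0.0
voltage = np.empty_like(t)
for k, i_k in enumerate(current):
    soc -= i_k * dt / (Q_Ah * 3600.0)              # coulomb counting
    u_rc += dt * (i_k / C1 - u_rc / (R1 * C1))     # explicit Euler step of the RC branch ODE
    voltage[k] = ocv(soc) - R0 * i_k - u_rc        # terminal voltage

print(f"terminal voltage range: {voltage.min():.3f} V to {voltage.max():.3f} V")
```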
Presented By Bruce Webster (Novus Nexus)
Authored By Bruce Webster (Novus Nexus)Davis Evans (Novus Nexus, Inc.) Karlheinz Peters (Novus Nexus, Inc.)
AbstractThe successful democratization of Engineering Simulation hinges on making simulation data and information accessible throughout the entire product and process lifecycle. This ensures that all relevant stakeholders can make informed design decisions efficiently. According to NAFEMS: “Democratization of Engineering Simulation (DOES) means a significant expansion in the use of Engineering Simulation by all users in a reliable way, for whom access to the power of Engineering Simulation would be beneficial.” This paper explores two key technologies essential for effective DOES:
1. CAD-Embedded Simulation (SimCAD or CAE CAD) – simulation that uses CAD models directly, without modification.
2. Abstract Modeling – an automated CAE processing approach that defines simulations as data models, associating simulation attributes with abstract geometric objects.
These technologies enable a critical expertise shift in the production of Engineering Simulation Data. Instead of relying on both CAD engineers and CAE analysts to generate and refine simulation models, the workflow is restructured as follows:
- CAD engineers, whose expertise lies in defining product geometry using CAD software, now inherently generate simulation-ready CAD models as the first step of their existing CAD modeling process. This eliminates the traditional reliance on CAE analysts for geometry cleanup and preprocessing. This approach requires adding detailed features only to the final selected SimCAD model that will be manufactured.
- CAE automation agents, leveraging Abstract Simulation Data Models, take over the processing of simulation data. Instead of requiring CAE analysts to manually set up simulations, these automation agents reuse the abstract simulation data model to assign the correct simulation behavior to the SimCAD model. They then process and run the simulation, automatically generating the Rapid CAE Report and placing it ‘on the shelf’ for consumption. This eliminates dependency on the scarce resource of CAE analysts.
This expertise shift ensures that the production of Engineering Simulation Data is more efficient, cost-effective, and seamlessly integrated into the design workflow, allowing simulation insights to be readily available throughout the product lifecycle. To illustrate this transformation, the paper presents an analogy between engineering simulation and an e-commerce model. Here:
- Producers: CAD engineers and CAE automation tools generate simulation data.
- Consumers: enterprise decision-makers utilize simulation information for design and manufacturing.
- E-commerce: a digital Simulation Product Performance Marketplace, built on Simulation Process and Data Management (SPDM) with AI, enabling consumers to make informed product decisions.
The discussion will compare the efficiency and cost-effectiveness of this approach against traditional workflows, which rely on both CAD engineers and CAE analysts for simulation data production. A detailed analysis of billable hours and calendar time required to make simulation data available with and without these key technologies will be provided. Additionally, the paper highlights necessary expertise and organizational shifts, specifically:
- Transitioning geometry cleanup tasks from CAE preprocessors back to CAD engineers using CAD software.
- Refining New Product Introduction (NPI) workflows, where CAD design efforts focus on creating geometry only at the level necessary for CAE simulations, detailing only the final validated design for manufacturing.
These shifts will significantly reduce CAE geometry cleanup efforts, minimizing costs and time while improving the availability of simulation data. A practical comparison between current CAE preprocessing methods and a democratized CAD-based approach will be presented. Finally, the paper will briefly explore the role of automation agents (CAE automation processors) alongside PLM, CAD PDM, and SPDM software in building a fully integrated, e-commerce-like platform for democratized Engineering Simulation.
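As a hedged sketch of the abstract-modeling idea, the snippet below stores simulation intent as a small data model keyed by named geometric groups, so that an automation agent could re-bind it to any simulation-ready CAD revision exposing the same group names. The class and group names are illustrative, not the authors' data model.

```python
from dataclasses import dataclass, field


@dataclass
class LoadCase:
    group: str            # abstract geometric object, e.g. a named face set in the CAD model
    kind: str             # "pressure", "force", "fixed", ...
    value: float = 0.0


@dataclass
class AbstractSimulationModel:
    name: str
    material_by_group: dict = field(default_factory=dict)
    load_cases: list = field(default_factory=list)

    def missing_groups(self, cad_groups: set) -> list:
        """Report which abstract groups are not present in a given SimCAD revision."""
        needed = set(self.material_by_group) | {lc.group for lc in self.load_cases}
        return sorted(needed - cad_groups)


bracket = AbstractSimulationModel(
    name="bracket_static",
    material_by_group={"BODY": "AlSi10Mg"},
    load_cases=[LoadCase("BOLT_FACES", "fixed"), LoadCase("LOAD_FACE", "force", 1500.0)],
)
print("unresolved groups:", bracket.missing_groups({"BODY", "BOLT_FACES"}))  # -> ['LOAD_FACE']
```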
AbstractThe increasing reliance on simulation models in engineering design and validation presents both opportunities and challenges. While simulations offer the potential to reduce costs, accelerate development timelines, and optimize designs, their credibility must be rigorously established to ensure reliable and actionable insights. This is especially critical in industries such as aerospace, defense, automotive, and energy, where safety and performance are paramount, and testing opportunities are often constrained by time or resources. This presentation outlines a comprehensive "brick-by-brick" approach to building simulation model credibility, drawing on real-world case studies and advanced methodologies. We begin by addressing the fundamental question: what does it mean for a simulation model to be credible? Credibility involves demonstrating that a model accurately represents the physical system it aims to simulate, across multiple dimensions such as geometry, material behavior, boundary conditions, and operational loads. To achieve this, we leverage the Predictive Capability Maturity Model (PCMM), a framework developed to assess and enhance the maturity of computational modeling efforts. The PCMM provides a structured methodology for setting goals, identifying improvement areas, and aligning simulation activities with experimental data in a systematic manner. The presentation explores some of the key “bricks” of the PCMM framework, including:
- Representation and Geometric Fidelity: ensuring the model accurately captures the physical structure’s geometry, using case studies like lattice structures at IRT Saint-Exupéry, where complex geometries required novel approaches to boundary condition management and measurement.
- Physics and Material Model Fidelity: demonstrating how material properties and physical interactions are represented accurately. This includes the use of finite element model updating (FEMU) to calibrate material models from experimental data.
- Validation via Experimental Comparison: illustrating how simulation results are compared against physical tests, using advanced instrumentation like multi-camera Digital Image Correlation (DIC) systems to capture high-fidelity strain and displacement data.
Each of these components is supported by examples from industrial practice. For instance, the collaboration with ArianeGroup on dual launch structures demonstrates how test-simulation data fusion can validate and enhance structural performance predictions, reducing reliance on exhaustive physical testing while increasing confidence in critical systems. The presentation also delves into tools and technologies such as Digital Image Correlation (DIC) and photogrammetry, which provide precise, spatially dense experimental data to align with simulation models. We conclude by discussing the broader implications of adopting a maturity-based framework like PCMM. Industrial implementations of these concepts enable engineers and managers to make evidence-based decisions about testing policies, prioritize model improvements, and establish clear internal benchmarks for simulation credibility. By integrating simulation and experimental data in a structured manner, organizations can achieve higher levels of model maturity, reduce development risks, and accelerate innovation cycles. This approach provides a roadmap for engineers and researchers seeking to bridge the gap between computational and physical domains, fostering more reliable and effective engineering solutions.
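To make the FEMU brick concrete, here is a minimal sketch of model updating posed as a least-squares problem: a single material parameter is adjusted until simulated deflections match DIC-style measurements. The closed-form cantilever "simulator" and the noise level are stand-ins for a real finite element model and test data.

```python
import numpy as np
from scipy.optimize import least_squares

x_meas = np.linspace(0.1, 1.0, 10)      # measurement positions along the beam [m]
F, L, I = 1000.0, 1.0, 8.0e-7           # tip load [N], length [m], second moment of area [m^4]


def simulated_deflection(E):
    """Cantilever tip-load deflection w(x) = F x^2 (3L - x) / (6 E I)."""
    return F * x_meas**2 * (3.0 * L - x_meas) / (6.0 * E * I)


# synthetic "DIC" measurements generated from a reference modulus plus noise
measured = simulated_deflection(70e9) + np.random.default_rng(1).normal(0.0, 1e-5, x_meas.size)


def residuals(p):
    return simulated_deflection(p[0] * 1e9) - measured   # parameter expressed in GPa for scaling


result = least_squares(residuals, x0=[100.0])
print(f"updated modulus: {result.x[0]:.1f} GPa")
```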
Authored & Presented By Christopher Woll (GNS Systems)
AbstractIn a landscape where market expectations and competitive offerings are constantly evolving, systems that integrate and contextualize multiple data sources and deliver actionable insights are highly valuable. Closing the gap between external market data and internal data provides companies in every industry with a strategic asset that enables them to bring products to market faster. Exploiting this potential is also the idea behind the Data Context Hub (DCH), which was developed as a research project over more than six years at Virtual Vehicle Research (Graz) together with automotive and rail OEMs. The platform brings together information from R&D and production data sources as well as from telemetry data streams and storage locations. The DCH creates an explorable context map in the form of a knowledge graph from area-specific data models. These are essential for streamlining processes, reducing risks and identifying new opportunities in data-driven development. The use of state-of-the-art AI models also supports developers in gaining deeper insights from data, predicting trends and automating tasks. Especially in the virtual product development process, the use of contextual graph databases opens up an important approach for the implementation of artificial intelligence methods in technical use cases. As an example, we consider a crash simulation, where the developed solution creates the necessary link between internal crash results and the standards, like NCAP or IIHS, on which the assessment of crash safety is based. The approach is that by constantly comparing the internally used crash values with the official values, it is ensured that no outdated values from outdated protocols are used. Changes in the officially defined values and standards are recorded before the crash tests. They are already included in the test simulations for the structure, materials and restraint systems of innovative vehicles at an early stage, before the evaluation of crash safety. In the concrete example of crash simulations, the contextual graph databases can be used to immediately recognize which specific values and standards form the basis for the evaluation of crash safety in the event of changes to components on a vehicle. The contextual graphs are built via an internal engine for context creation, which creates factually linked data points and visualizes them via a comprehensible path within the graph. To interpret these results, generative AI models such as Large Language Models (LLMs) can be used to provide users with more precise answers about the data sources and relationships. In the example mentioned, the contextual graph databases can be used to quickly determine which crash test simulations need to be rerun to check the system response and re-evaluate crash safety performance. As a result, only these necessary crash test simulations are carried out on the available resources, which saves companies development time and costs in the long term. Linking internal data sources with external data sources enables companies to gain differentiated insights that were previously inaccessible. DCH's contextual graph databases can provide precise, relevant answers to specific user queries in specific data contexts through the clever use of state-of-the-art AI models. This enables users to better understand the answers and make faster decisions.
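A minimal sketch of the contextual-graph idea is given below: protocol limit values, components and simulations become linked nodes, so a change to a standard value (or a component revision) can be traced to the simulations that need to be rerun. Node names, relations and the use of networkx are illustrative assumptions, not the DCH engine.

```python
import networkx as nx

g = nx.DiGraph()
g.add_node("frontal_crash_sim_042", kind="simulation")
g.add_node("pedestrian_sim_007", kind="simulation")
g.add_edge("NCAP_protocol_v9", "chest_deflection_limit", relation="defines")
g.add_edge("chest_deflection_limit", "frontal_crash_sim_042", relation="assessed_by")
g.add_edge("door_beam_rev_B", "frontal_crash_sim_042", relation="used_in")
g.add_edge("NCAP_protocol_v9", "pedestrian_head_limit", relation="defines")
g.add_edge("pedestrian_head_limit", "pedestrian_sim_007", relation="assessed_by")


def simulations_affected_by(changed_node: str) -> set:
    """Every simulation reachable from a changed limit value or component revision."""
    return {n for n in nx.descendants(g, changed_node)
            if g.nodes[n].get("kind") == "simulation"}


print(simulations_affected_by("chest_deflection_limit"))  # a protocol value changed
print(simulations_affected_by("door_beam_rev_B"))         # a component revision traced the same way
```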
Presented By Božo Damjanović (Numikon)
Authored By Božo Damjanović (Numikon)Pejo Konjatic (Mechanical Engineering Faculty in Slavonski Brod) Marko Katinic (Mechanical Engineering Faculty in Slavonski Brod) Zdravko Ivancic (Numikon Ltd)
AbstractEnsuring the structural integrity of piping systems is crucial in industrial operations to prevent catastrophic failures and minimize shutdown time. This study focuses on a transportation-damaged pipe exposed to high-temperature conditions and cyclic loading, reflecting the challenges of real-world scenarios. The study aims to evaluate the service life of a transportation-damaged pipe that is intended to be part of a hot piping system subjected to 22,000 operational cycles under two daily charge and discharge conditions. The flaw size in the damaged pipe was determined based on a failure assessment procedure, ensuring a conservative and reliable approach. The Stress Intensity Factor and Plastic Limit Pressure, essential parameters for a failure assessment procedure, were computed numerically using Finite Element Analysis (FEA) and validated against available analytical solutions. FEA was employed to provide detailed crack behavior under operational stresses. For the fatigue crack growth evaluation, the Paris crack growth law was applied. Numerical crack propagation was simulated using Ansys's S.M.A.R.T. crack growth module, demonstrating excellent alignment with analytical predictions based on the Paris law. A Failure Assessment Diagram (FAD) was used to assess service life, incorporating constant working pressure and fracture toughness while considering the evolving crack size during propagation. Material properties at operating temperature, including elastic modulus, yield strength, and tensile strength, were used to ensure realistic FAD curves. The comparative analysis revealed good alignment between numerical and analytical solutions, with the analytical approaches offering a conservative and time-efficient alternative to detailed FEA. The results highlight that the numerical solution predicted approximately 8,500 additional cycles before fracture through the ligament compared to analytical methods. The findings demonstrate that although analytical methods are conservative, they offer significant advantages in terms of efficiency and safety, making them ideal for initial evaluations of damaged components under cyclic loading. These results are crucial for engineering practice, providing practical tools for integrity assessments of damaged industrial and power piping systems while ensuring reliability and performance in critical applications.
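For orientation, the sketch below shows the analytical side of such an assessment: Paris-law growth da/dN = C*(dK)^m integrated cycle by cycle with an assumed constant geometry factor. The coefficients are illustrative and are not the assessed material data or the Ansys S.M.A.R.T. setup.

```python
import numpy as np

C, m = 5.0e-11, 3.0           # Paris constants (da/dN in m/cycle, dK in MPa*sqrt(m)), illustrative
Y = 1.12                      # assumed constant geometry factor
d_sigma = 150.0               # stress range per charge/discharge cycle [MPa]
a, a_crit = 2.0e-3, 12.0e-3   # initial flaw depth and critical (ligament) depth [m]

cycles = 0
while a < a_crit and cycles < 1_000_000:
    dK = Y * d_sigma * np.sqrt(np.pi * a)   # stress intensity factor range
    a += C * dK**m                          # crack growth in one loading cycle
    cycles += 1

print(f"cycles to grow the flaw to the critical depth: {cycles}")
```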
Presented By Anton Jurinic (Simulia Scandinavia)
Authored By Anton Jurinic (Simulia Scandinavia)Claus B.W. Pedersen (Dassault Systemes) Jason Action (Lockheed Martin Aeronautics) Benjamin Gajus (Lockheed Martin Aeronautics) Clay McElwain (Lockheed Martin Aeronautics)
AbstractNon-parametric stiffness, strength and/or weight optimization of nonlinear structures is challenging due to the many potential physical and numerical instabilities of the primal solutions during the optimization iterations. A common approach is to simplify the physics and ignore the non-linear effects of realistic physical modeling. However, such simplifications often fail when the linearly optimized structures are validated using full non-linear modeling including large deformations, imperfections, elastic-plastic material, large-sliding contact with friction, etc. This potentially leads to additional redesign work and lost time, and a high likelihood of sub-optimal designs. There is a trade-off between more time spent in understanding the physics, setting up the nonlinear analysis and running longer optimizations, versus simplified physics with faster optimizations but at the risk of less accurate results which potentially fail in validation. We will demonstrate for a number of applications that sensitivity-based optimization approaches using various recent stabilization techniques can successfully address optimizations where severe non-linear phenomena are present in the structural modeling. The stabilization techniques are utilized depending upon the problem type and degree of nonlinearity. These stabilization techniques are independent from each other and can be applied separately or in various combinations. First, an implicit dynamic procedure can be used in dynamic or quasi-static events for the primal solution modeling and the corresponding adjoint sensitivities. The inertia of the system acts as additional damping for the primal solution. Second, a hyper-elastic stabilization scheme has been implemented to address artificial local buckling due to numerical singularities in geometrically nonlinear modeling caused by distorted void elements in topology optimization. The implementation adds artificial stiffness in a non-intrusive manner and only affects otherwise numerically unstable regions which would normally cause the primal solution to fail. Third, an artificial added stabilization force scheme in the post-buckling region for force-driven problems is implemented. This ensures a full solution for a force-driven global buckling analysis, which would often be numerically unstable in the post-buckling range. Finally, it is also found that using imperfections helps stabilize the convergence of the optimization iterations as well as ensure that the design is robust. Three different nonlinear structural optimization cases are used as demonstrators of the above-described techniques, covering topology, sizing and shape optimization. Each problem contains different types of nonlinearities including large deformations, elastic-plastic material constitutive behavior and large-sliding frictional contact. In each case a successful optimization has resulted in improved performance of the nonlinear structures.
Presented By Atmane Thelib (Fabrication Saharien Preform)
Authored By Atmane Thelib (Fabrication Saharien Preform)Miloud Zellouf (University Mohammed Khider Biskra)
AbstractThe injection molding process for PET (Polyethylene Terephthalate) preforms is vital to producing high-quality packaging materials. However, manufacturers (e.g. Engel Global, Husky Technologies, ...) often face challenges such as uneven cavity filling, inefficient cooling, and defects like warping, sink marks, or voids. Addressing these issues traditionally involves extensive trial-and-error experiments, which can be both costly and time-consuming. This study explores computational fluid dynamics (CFD) modeling as a modern alternative to overcome these limitations, providing a deeper understanding of the molding process and guiding optimization efforts. Through advanced simulations, the study investigates critical aspects of preform production, including melt flow dynamics, pressure distribution, thermal gradients, and solidification behavior. These factors are essential for ensuring consistent product quality while reducing cycle times. By examining how mold design, gate placement, injection speed, and cooling configurations influence the process, the research identifies strategies to minimize defects and improve efficiency. One significant finding is the importance of cooling optimization in achieving uniform heat dissipation. Proper cooling ensures even solidification, which directly affects preform strength and appearance. Similarly, the study reveals how balanced gate designs improve material flow, reducing localized overheating and material degradation. These insights highlight the potential of computational tools to predict process outcomes under various conditions, eliminating the need for costly physical prototypes. This work also emphasizes the environmental and economic benefits of computational simulations. By reducing energy consumption, material waste, and experimental iterations, manufacturers can achieve more sustainable production practices. The results contribute to bridging a significant research gap in PET preform molding, offering a practical framework for improving process efficiency and precision. In summary, this study demonstrates how computational methods can revolutionize PET injection molding by addressing long-standing challenges with precision and efficiency. These tools empower manufacturers to deliver consistent, defect-free preforms while adapting to the evolving demands of global markets. The ability to simulate and optimize every aspect of the process represents a paradigm shift, transforming injection molding into a faster, more cost-effective, and environmentally responsible manufacturing solution. This work not only bridges a critical research gap but also sets the stage for further innovations in polymer processing and sustainable industrial practices.
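As a small, hedged example of the kind of first estimate such simulations refine, the snippet below evaluates a commonly used plate cooling-time formula before any full CFD run. The property values are rough, illustrative numbers for PET, and the formula is one common variant of the estimate, not a result from this study.

```python
# t_c = s^2 / (pi^2 * alpha) * ln( (4/pi) * (T_melt - T_mold) / (T_eject - T_mold) )
import math

s = 3.0e-3          # wall thickness [m]
alpha = 1.0e-7      # thermal diffusivity of the melt [m^2/s]
T_melt, T_mold, T_eject = 280.0, 15.0, 80.0   # [degC]

t_c = (s**2 / (math.pi**2 * alpha)) * math.log(
    (4.0 / math.pi) * (T_melt - T_mold) / (T_eject - T_mold)
)
print(f"estimated cooling time: {t_c:.1f} s")
```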
Authored By Paul Mc Grath (Neural Concept Ltd.)Andreas Kemle (MAHLE GmbH)
AbstractIn today’s competitive engineering landscape, organizations face immense pressure to deliver high-performing, energy-efficient products at an accelerated pace. This challenge is particularly prominent in the electric vehicle (EV) industry, where efficient battery thermal management systems are critical for both performance and longevity. Design teams must innovate rapidly, navigating complex trade-offs between performance and cost while simultaneously adhering to manufacturing constraints and stringent timelines. However, traditional design processes for battery cold plates often fall short, hindered by lengthy iteration cycles and limited automation. To address these challenges, engineering intelligence (EI) provides a transformative solution by combining generative design, real-time feedback, and historical data utilization to significantly reduce development times and enhance product quality.Neural Concept, in collaboration with MAHLE, has redefined this process by leveraging Neural Concept’s advanced engineering intelligence platform. This workflow dramatically accelerates the design process, enabling swift response to evolving customer requirements (such as package or operating conditions) with optimized designs delivered in exceptionally short timeframes. Designers can now explore hundreds of potential configurations, identifying optimal trade-offs between performance and cost with a comprehensive 360° view of the design problem including live insights on manufacturing constraints, packaging, weight and performance. By integrating advanced automatic design optimization techniques, the solution ensures the designs reach the edge of achievable performance, clearing doubts and delivering designs that combine peak thermal efficiency and minimized pressure drop.In a case study with MAHLE, this workflow demonstrated remarkable outcomes. Despite MAHLE already utilizing a streamlined development tool chain to reduce friction between the design and the simulation tasks, the implementation of Neural Concept’s new tool in a development project with a challenging timeline led to further improvements. It resulted in a 10% reduction in pressure drop without compromising thermal performance compared to a traditional design. In another design concept, Neural Concept’s solution reduced the temperature delta in the battery pack by 10% while maintaining the same pressure drop as a conventional design. These outcomes highlight the transformative potential of Neural Concept’s platform to enhance competitiveness in the EV industry, setting a new standard for battery cold plate design by eliminating inefficiencies and empowering teams with cutting-edge technology.This case study exemplifies how engineering intelligence empowers teams to overcome the limitations of conventional tools, enabling rapid iteration, improved performance, and enhanced competitiveness in the EV market. By fostering collaboration and integrating cutting-edge AI techniques, this partnership demonstrates the potential of next-generation workflows to redefine industry standards.
Presented By Tobias Gloesslein (Esteco Software)
Authored By Tobias Gloesslein (Esteco Software)Kay Schmidt (Cummins Deutschland GmbH) Sohanur Rahman (Cummins Deutschland GmbH) Saurabh Sharma (ESTECO Software GmbH)
AbstractDuring a product's regular Engineering and Current Product Support (CPS) lifecycle, situations appear where quick and reliable decisions need to be made. Simulation can be a helpful tool to support the decision-making process and provide important information that can influence decision quality. In many engineering environments, manual processes must be executed and accompany the evaluation of numerical results. Usually, a work request system exists that includes creating process input and output documents, reviews of tasks and results, and a queue for the analysis work to be executed. Executing such a process usually stretches the time needed to deliver results and limits the influence on decision-making for urgent tasks, which usually need a decision within a few hours or days. Highly standardised simulation work can hence be automated and sequentially executed, even if different simulation disciplines or tools are involved, using suitable Multi-Disciplinary Optimization (MDO) and Simulation Process and Data Management (SPDM) software tools. A workflow prepared in this manner can be executed quickly even by a non-expert user and is only limited by the simulation runtimes. It allows results to be delivered quickly and increases the chances of influencing the decision-making process in urgent engineering tasks. An example of such a process is presented, focusing on the analytical standard work carried out on the membrane of a Diesel Exhaust Fluid (DEF) pump supplying flow in a Selective Catalytic Reduction (SCR) dosing system. Here, the membrane performance is evaluated by analysing specific properties, including the stress condition, strain hot spots, and the final pump flow performance. An FEA simulation model is utilised to analyse the structural behaviour. Another FEA simulation is set up to investigate the pump flow performance, which obtains the volume due to membrane displacement as a function of piston stroke and fluid pressure. This data is then plugged into a 1D system simulation of the pump, driven with varying rotational speeds, to obtain the final flow condition. The first step towards successfully implementing the concept was parameterising the CAD model of the membrane geometry. Afterwards, the FEA simulation models were parameterised based on the most relevant inputs and outputs and automated with Python scripts for different operations (e.g., result calculation, data transfer, and generating and exporting required contour plots and animations). The next step was integrating the simulation, pre-processing, and post-processing tools in a single environment using an MDO tool. Three workflows were created to achieve this: two investigation-specific (structural behaviour and flow condition) back-end workflows and a front-end workflow nesting the back-end workflows. The users interact with the front-end workflow while the back-end workflows conduct all the necessary operations, ensuring simplicity. An SPDM tool deployed on Cummins' intranet was then used to democratise the simulation workflows along with their dependencies (e.g., scripts and simulation files). Since multiple users were planned to execute the workflows, their versioning and traceability could also be maintained in the SPDM tool. It also enabled the sharing of computational resources by allowing users to run the process on different workstations and the in-house HPC. Since the workflows were implemented as a team project, anyone in the team can now run the workflows. This user interface is accessible via a web browser on the intranet, and anyone with granted access can run the front-end workflow. The post-processed results are also made available to users on their web browsers, enabling them to conduct their necessary investigations.
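A minimal sketch of the nesting idea described above is given below: a front-end workflow calls the two back-end workflows in sequence and passes the displaced volume from the structural step into the 1D pump-flow step. The function bodies are placeholders, not the ESTECO/Cummins implementation.

```python
def run_structural_workflow(membrane_params: dict) -> dict:
    """Back-end workflow 1: parametric FEA of the membrane (placeholder results)."""
    stroke = membrane_params["piston_stroke_mm"]
    return {"max_stress_MPa": 42.0, "displaced_volume_mm3": 180.0 * stroke}


def run_flow_workflow(displaced_volume_mm3: float, speed_rpm: float) -> dict:
    """Back-end workflow 2: 1D system simulation of the DEF pump (placeholder model)."""
    return {"flow_ml_per_min": displaced_volume_mm3 * speed_rpm * 1e-3}


def front_end_workflow(membrane_params: dict, speeds_rpm: list) -> list:
    """Front-end workflow the non-expert user interacts with."""
    structural = run_structural_workflow(membrane_params)
    return [
        {"speed_rpm": n, **run_flow_workflow(structural["displaced_volume_mm3"], n)}
        for n in speeds_rpm
    ]


print(front_end_workflow({"piston_stroke_mm": 1.2}, speeds_rpm=[500, 1000, 2000]))
```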
BiographyTobias Gloesslein is the Team Leader of the Engineering & Support team at ESTECO Software GmbH in Nürnberg, Germany. He joined ESTECO as an Application Engineer in 2020. Prior to this, he graduated from Lafayette College in Easton, Pennsylvania, with a Bachelor of Science in Mechanical Engineering in 2018 and then went on to receive his Master’s degree in Product Development and Systems Design from the University of Applied Sciences Würzburg-Schweinfurt in 2020.
Presented By David Heiny (SimScale)
Authored By David Heiny (SimScale)Richard Szoeke-Schuller (SimScale GmbH) Naghman Khan (SimScale GmbH)
AbstractEngineering teams are leveraging cloud-native CAE tools to obtain faster insights from their product development and rapidly develop competitive products. Designers have access to robust end-to-end design, simulation and optimization tools in a web browser, leading to the advent of the all-cloud engineering software stack. Many legacy solvers, however, have not migrated or have been too slow to migrate to the cloud, slowing adoption and constraining their usage and benefit to engineers. The demand for high-performance, scalable, and easily accessible simulation tools is rapidly increasing. While established solvers such as Marc, an industry standard for nonlinear and multiphysics simulations, offer powerful and mature capabilities for product manufacturers, their traditional desktop-based deployment often presents barriers related to hardware constraints, licensing complexity, and limited accessibility. To address these challenges, this paper explores the integration of traditional solvers into cloud-native simulation platforms, effectively unlocking trusted and advanced features for a broader audience of engineers worldwide. Cloud-native CAE infrastructure embraces the principles of containerization, microservices, and scalable compute resources, making it possible to deploy sophisticated solvers in a fully virtualized environment. This approach mitigates the need for specialized local hardware, reduces setup time, and simplifies access to extensive nonlinear analysis capabilities, including large deformation, contact, and material nonlinearity. The authors will demonstrate how traditional solvers can be rapidly deployed to the cloud, highlighting the challenges of transitioning a legacy desktop application to a cloud-native ecosystem without compromising solver performance. Furthermore, we demonstrate how cloud-native deployment enables parallel and on-demand simulation, vastly improving the scalability and responsiveness of traditional solvers. This approach also facilitates organizational change, where simulation experts become enablers for designers who run complex physics cases with confidence, monitored through collaboration and SPDM features. An end-user case study is presented that illustrates this successful deployment, showcasing enhanced simulation workflows for complex industrial applications in the automotive sector. These include scenarios such as nonlinear contacts in mechanical fastenings and connectors, high temperature / pressure gradient applications across components, and coupled fluid-thermal-structural modeling. Additionally, the accessibility benefits of cloud deployment are emphasized, enabling collaborative, global usage without the need for expensive on-premises hardware or complex IT infrastructure. We show evidence where the cloud infrastructure has enabled engineers to run large-scale simulations more efficiently. The findings indicate that deploying legacy but powerful solvers like Marc through a cloud-native platform not only preserves their robust features and rich legacy but also democratizes access to high-end nonlinear analysis tools. This shift can empower a larger segment of the engineering community, promoting innovation and accelerating product development cycles.
Presented By Nima Ameri (Rolls Royce)
Authored By Nima Ameri (Rolls Royce)Shiva Babu (Rolls-Royce PLC) Marco Nunez (Rolls-Royce PLC) Yashwant Gurbani (Rolls-Royce PLC)
AbstractPresented in this paper is the approach conducted to democratise the adoption of conditional Generative Adversarial Networks (cGAN) on the cloud for preliminary engineering design applications. This work addresses a number of challenges associated with the use of cGAN networks for engineering applications and explores them through the illustration of various use cases. In contrast to other applications, accuracy from synthetic data plays a crucial role within an engineering context; this emphasis on accuracy puts increased attention on the correct execution of each step involved in the process of training such models, such as data preparation and architecture configuration. Furthermore, a range of additional non-technical considerations highlight the cloud as the best-suited solution to access higher and scalable computational resources as well as specialised COTS technologies. To address this, Rolls-Royce has partnered with Databricks, leveraging its Data Intelligence Platform from the Rolls-Royce Data Science Environment (DSE) hosted on Microsoft Azure: Rolls-Royce’s DSE is a highly integrated platform of world-leading tools and technology which enables users to develop and deploy analytics, data science and machine learning in a secure and scalable manner within the company’s strategic digital environment, while allowing access to third parties including academic partners and suppliers in a safe and controlled manner. The adoption of cloud technologies was aimed at achieving a significant reduction in runtime, with a target factor of 30 when compared to the equivalent on-prem run. Furthermore, an additional goal was the development of a generalised framework for the identification of the optimal network architecture and hyperparameters for a given use case. This work will demonstrate a solution to this goal by leveraging and combining a number of technologies: this includes the use of the Ray package for hyper-parameter and architecture optimisation, and the adoption of MLflow for the management of the GAN model lifecycle and experiment tracking. Particular attention was also given to data management and governance of the engineering data, which comprised a combination of images, tabular data and metadata produced by dedicated physics-based engineering software. To this end, data was imported and converted into the industry-standard “delta format”, which is optimal for cloud and distributed computation. Finally, the data governance framework was provided by Databricks’s Unity Catalog, which establishes a crucial framework for compliance-centric industries, such as aerospace. The effectiveness of the approach is demonstrated with engineering use cases of growing complexity.
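As a hedged sketch of how the hyper-parameter search and experiment tracking pieces can be combined, the snippet below drives a placeholder training function with Ray Tune and logs each trial to MLflow. The scoring function stands in for a full cGAN training loop, and the search space is illustrative, not the Rolls-Royce framework.

```python
import mlflow
from ray import tune


def train_cgan(config):
    # Placeholder for a real cGAN training run; returns a stand-in fidelity score.
    score = abs(config["lr"] - 2e-4) * 1e4 + abs(config["latent_dim"] - 128) / 128
    with mlflow.start_run():
        mlflow.log_params(config)
        mlflow.log_metric("surrogate_fid", score)
    return {"surrogate_fid": score}    # final metrics handed back to Ray Tune


tuner = tune.Tuner(
    train_cgan,
    param_space={
        "lr": tune.loguniform(1e-5, 1e-3),
        "latent_dim": tune.choice([64, 128, 256]),
    },
    tune_config=tune.TuneConfig(metric="surrogate_fid", mode="min", num_samples=20),
)
best = tuner.fit().get_best_result()
print("best configuration found:", best.config)
```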
Presented By Cristian Bagni (HBK)
Authored By Cristian Bagni (HBK)Artur Tarasek (NIO Performance Engineering Ltd) Andrew Halfpenny (Hottinger Bruel & Kjaer (HBK)) Michelle Hill (Hottinger Bruel & Kjaer (HBK))
AbstractThe demand for lightweight structures is seeing an increasing trend, driven by the transition towards more sustainable ways of transportation and the imperative to reduce emissions and fuel consumption. Adhesive joints and hybrid joints, which combine adhesive bonding with traditional joining techniques such as spot welding and riveting, are emerging as a viable solution for achieving lightweight components. These joints are gaining traction in the transportation sector due to their ability to leverage the benefits of each joining method. To enhance the design of adhesive and hybrid joints and mitigate the risk of fatigue failures during service, the transportation industry requires efficient, robust, and user-friendly methods for modelling and estimating the fatigue life of these joints. The fatigue performance of hybrid joints is affected by various factors, including the material, thickness and surface preparation of the adherends, the type of adhesive and, in the case of hybrid joints, the type, size, and quality of the mechanical fasteners. Consequently, it is advisable to derive custom Stress-Life (SN) curve parameters from tests on adhesive or hybrid joint specimens that accurately represent production joints.This study introduces a pragmatic approach for estimating the fatigue life of adhesive and hybrid joints between metals, which can be readily implemented by companies in the transportation industry. The proposed approach includes Finite Element (FE) modelling strategies and recommendations to obtain the necessary stress data with minimal modifications to existing FE modelling practices commonly used in the automotive industry. These strategies ensure that the FE models are computationally efficient, exhibit low mesh sensitivity, and do not require congruent meshes. Additionally, the method encompasses fatigue analysis and life estimation of adhesive and hybrid joints between metals using a Stress-Life (SN) based analysis tool. Finally, it includes a testing and data analysis framework to generate custom SN curves suitable for use in analysis. The generation of custom SN curves is also demonstrated using real test data obtained as part of an extensive testing programme on metal lap-shear and coach-peel hybrid joints with different types of mechanical fasteners.
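To illustrate the final step of deriving custom SN-curve parameters from joint coupon tests, the sketch below performs a Basquin fit S = A*N^b as a linear regression in log-log space. The load-amplitude and cycles-to-failure pairs are made-up illustration data, not results from the testing programme.

```python
import numpy as np

# lap-shear hybrid-joint results: load amplitude [kN] vs cycles to failure (illustration only)
load_amp = np.array([6.0, 5.0, 4.0, 3.2, 2.5])
cycles = np.array([3.1e4, 8.9e4, 2.7e5, 7.8e5, 2.4e6])

b, logA = np.polyfit(np.log10(cycles), np.log10(load_amp), 1)   # slope b, intercept log10(A)
A = 10.0**logA
print(f"Basquin fit: S = {A:.2f} * N^({b:.3f})")
print(f"allowable amplitude at 1e6 cycles: {A * 1e6**b:.2f} kN")
```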
BiographyDr. Cristian Bagni holds a Master’s Degree in Civil and Structural Engineering (Università degli Studi di Parma, Italy) and a PhD in Structural Engineering (The University of Sheffield), where he developed a new Finite Element methodology to assess the static and fatigue behaviour of notched and cracked components. He has co-authored papers on Computational Mechanics and Fatigue, and he has reviewed papers for international journals including ‘Fatigue & Fracture of Engineering Materials & Structures’ (FFEMS) and ‘Theoretical and Applied Fracture Mechanics’. Working for several years at the University of Sheffield’s Advanced Manufacturing Research Centre (AMRC) on high-profile aerospace research projects, he also gained extensive experience of advanced manufacturing processes. Cristian joined Hottinger Brüel & Kjær in July 2020 as Technologist for Fatigue and Fracture, and amongst his activities he leads research on the fatigue behaviour of joints. He also supports the Advanced Materials Characterisation & Testing (AMCT) facility with the analysis and post-processing of fatigue test results, and the consequent characterisation of both joints and parent materials.
Presented By James Bailey (UKAEA)
Authored By James Bailey (UKAEA)Michelle Baxter (UKAEA) Matthew Mavis (UKAEA) Adam Shine (UKAEA) Oliver Marshall (UKAEA) Samad Khani (UKAEA) Tom Deighan (UKAEA) David Pickersgill (UKAEA)
AbstractThe successful development of fusion power will rely on the key technology of breeder blanket components. These components surround the fusion plasma, providing significant heat for power generation, and breeding the tritium fuel on which the plasma reaction depends. However, the technology for these components remains at a low technology readiness level – many concept designs have been developed for the ITER test programme and for subsequent power plant demonstrators, but none has yet been tested in an operational tokamak environment. This harsh environment includes multiple heat loads, the impact of plasma disruption events and high neutron fluence from the plasma. Successful designs must withstand the loads imparted while demonstrating high performance for tritium and heat generation. Designs must also be tailored to different tokamak configurations and differing programme priorities. In this project, concept evaluation and down-selection were achieved using a systems-engineering approach to systematically gather requirements, generate concepts and then down-select. A key part of this was utilising systems simulation and analysis workflows. This enabled a consistent evaluation across the large number of different blanket designs and material options at an appropriate fidelity for the design stage. Linking systems simulation, modelling the hydraulic, thermal and structural aspects, with neutronics analysis for the tritium breeding performance enabled consideration of the integrated problem and the trade-offs within the competing requirements of the breeder blankets. The analysis workflows have utilised existing COTS solutions, integrating the neutronics software and systems simulation, while maximising the benefit from the existing algorithms and methods implemented in the tool. A library for systems simulation has been developed, building the representation of blanket concepts from base elements implementing 0D and quasi-1D models. This approach extends the applicability of the library to enable the analysis of other components, within and outside of fusion. The systems-simulation library, ARTEMIS, and analysis workflows will be presented alongside the leading concepts and their performance, demonstrating the benefit of this approach for the concept down-selection of complex and novel systems.
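As a hedged illustration of the kind of 0D base element such a library might build on, the sketch below implements a lumped coolant energy balance for one blanket segment. The interface and values are illustrative and are not the ARTEMIS implementation.

```python
from dataclasses import dataclass


@dataclass
class BlanketSegment0D:
    heat_load_MW: float      # deposited nuclear plus surface heating
    mass_flow_kg_s: float    # coolant mass flow through the segment
    cp_kJ_kgK: float         # coolant specific heat capacity

    def outlet_temperature(self, inlet_C: float) -> float:
        """Steady-state energy balance: T_out = T_in + Q / (m_dot * cp)."""
        return inlet_C + self.heat_load_MW * 1e3 / (self.mass_flow_kg_s * self.cp_kJ_kgK)


segment = BlanketSegment0D(heat_load_MW=1.5, mass_flow_kg_s=8.0, cp_kJ_kgK=5.19)  # helium-like cp
print(f"outlet temperature: {segment.outlet_temperature(300.0):.1f} degC")
```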
Presented By Ali Nassiri (Ohio State University)
Authored By Ali Nassiri (Ohio State University)Phillip Aquino (Honda Research Institute USA) Allen Sheldon (Honda Research Institute USA, Inc.) Sogol Lotfi (Honda Development & Manufacturing of America, LLC) Duane Detwiler (Honda Research Institute USA, Inc.)
AbstractThe fabrication process for lithium-ion electrodes typically involves four main steps: mixing, coating, drying, and calendering. Despite extensive research, the impact of each step and its associated parameters on the final electrode microstructure and performance has not been thoroughly investigated. This study presents a novel multi-scale multi-physics computational framework to predict process-to-property relationships in lithium-ion cathode manufacturing. This work is motivated by two critical challenges in battery production: 1) the difficulty of implementing lab-scale research findings in mass production without substantial modifications, and 2) the low yield of acceptable batteries, where achieving failure rates below 10% remains challenging. In this research, first, using the discrete element method (DEM), a slurry containing active materials, a carbon-binder domain, and solvent was generated. Second, a macro-scale computational fluid dynamics (CFD) model was constructed to replicate the coating machine and drying step. Third, DEM was coupled with the CFD and thermo-mechanical finite element models to simulate the evaporation phase and predict the temperature distribution and fluid flow regime inside the coater as well as across the cathode electrode, the thickness of the electrode, and the overall shrinkage of the sample. Finally, the electrode was further compressed to mimic the calendering step. By incorporating damage criteria, the numerical simulation could also predict the onset of crack formation and delamination phenomena, which are considered the primary failure mechanisms in cathode electrode manufacturing. To validate the numerical simulation results, experimental tests were conducted to fabricate NMC 111 cathode electrodes. Then, material characterizations including scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS), and micro-CT were performed at various sections to analyze the microstructure formation. The experimental findings were then compared to simulation results at corresponding spatiotemporal points within the computational domain. Good agreement was established between the results for the process parameters investigated. The numerical simulation results demonstrated how variations in manufacturing process parameters affect the microstructure, performance, and aging characteristics of lithium-ion cells. Additionally, the simulation results confirmed that by employing high-fidelity physics-based numerical simulation at different scales, the optimum process parameters, in particular, drying rate and temperature, can be identified for producing defect-free electrodes with tailored microstructures.
Authored & Presented By Timothy Senart (CRM Group)
AbstractCold Roll Forming (CRF) is one of the most productive processes for manufacturing thin-walled products with constant cross section. It consists of a continuous bending operation of a long metal sheet. Roll-forming processes have gained high interest in industry for forming Ultra-High Strength Steels (UHSS). However, CRF remains a complex process, and it is affected by different problems, such as wave, torsion, twist or bow defects and elastic springback [3]. The aim of the analysis is to predict these defects and optimize the roll-forming process. This work introduces a time-effective numerical methodology for the study of steel sheet cold roll-forming processes. The goal of this roll forming FEA is to predict the strain and stress fields occurring during the process for a given roll forming line. The outcomes of the simulation are expected to help in designing the process, by predicting the occurrence of defects due to unbalanced springback or excessive longitudinal strain, which is a crucial factor in the quality evaluation of roll design [4]. Furthermore, the resulting strain field can be used to initialize analyses on final products, and take into account the impact of their manufacturing process. The simulation of a steady-state process of the sheet uncoiling and passing through the rolls would require a great amount of computational power and time. To decrease the process lead time, only a small portion of the metal sheet is modelled. The continuity of the process is ensured by guiding the end rows of nodes of the modelled portion of the studied sheet. The results from the FEA and the model assumptions are compared to experimental tests. Strain gauges and 3D scanning are employed to study the strain field of the flange and the kinematics of the process. Longitudinal strain (z-axis) and sheet geometry are employed to validate the FE model. The roll forming line consists of 6 forming stations, placed at a constant distance of 500 mm from each other. The specimen sheets are 2 meters long and 200 mm wide, with a 3D scanned area adjacent to the centerline and four unidirectional strain gauges on the flange (see fig. 3). The disposition of 3 strain gauges at different longitudinal positions, at the same distance from the folding line, makes it possible to exclude the influence of the blank free edges on the two gauges at the centerline. The maximum value of measured longitudinal strain is 20% lower than the ratio between the yield stress and the elastic modulus, and there are no evident defects on the resulting profile. The aim of the project is to create an FE methodology able to give fast and reliable results about the state variables inside the part throughout the cold roll-forming process. The outcome of this analysis will be employed to initialize the stress and strain state on existing FE models of parts produced through roll forming. The implementation of effects due to manufacturing is expected to improve the accuracy of structural analyses on final products, which are nowadays studied assuming a zero strain and stress field at the beginning of the analysis. To meet the requirements of the industry, the model is designed to be efficiently set up and computed in a short time. A script makes the pre-processing procedure fast and versatile, starting from the roll-forming flower of the line. Furthermore, the commercial FE software LS-DYNA simplifies the initialization of stress and strain states due to manufacturing into existing FE models. The analysis results show a good correlation within a competitive lead time for industrial applications.
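To illustrate the kind of pre-processing script mentioned above, the following Python sketch distributes a total fold angle over the forming stations of a roll-forming flower and lists the station positions. The station pitch and count follow the abstract; the angle schedule and the linear distribution rule are assumptions for illustration only, not the CRM Group implementation.

```python
# Illustrative pre-processing sketch: distribute the total bend angle of a profile
# over the forming stations of a roll-forming "flower" and report station positions.
# The station pitch (500 mm) and station count (6) follow the abstract; the angle
# schedule and the linear distribution rule are assumptions for illustration only.
import numpy as np

n_stations = 6
pitch_mm = 500.0
total_bend_deg = 80.0          # assumed final fold angle of the flange

# Simple linear flower: equal angle increment per station.
station_angles = np.linspace(total_bend_deg / n_stations, total_bend_deg, n_stations)
station_positions = np.arange(1, n_stations + 1) * pitch_mm

for i, (z, a) in enumerate(zip(station_positions, station_angles), start=1):
    print(f"station {i}: z = {z:6.1f} mm, target fold angle = {a:5.1f} deg")

# Such a table can then drive the automatic placement of rolls and the guiding
# of the end rows of nodes of the modelled sheet portion mentioned above.
```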
BiographyMechanical engineer (2014). At CRM Group since 2019.
Presented By Henan Mao (Ansys)
Authored By Henan Mao (Ansys)Dandan Lyu (Ansys) Tieyu Zheng (Microsoft Corporation) Wei Hu (Ansys) Devashish Sarkar (Ansys)
AbstractAccurate modeling of solder joints in printed circuit boards (PCBs) under dynamic loads, such as shock and vibration, is critical for ensuring the reliability and durability of electronic devices. These components are highly susceptible to failure under dynamic loading conditions, making it essential to model their behavior with precision. Traditional modeling techniques often rely on simplified beam elements to improve computational speed or solid elements for more detailed analysis. However, these approaches have limitations in balancing efficiency and accuracy, especially when critical components like solder joints require detailed modeling. To address this challenge, this study introduces a two-scale co-simulation approach that optimizes both computational efficiency and accuracy in modeling complex systems, particularly in the context of solder joint behavior within PCBs. The core of the two-scale method is the division of the system into two models: a global model representing the overall assembly or structure and a local model focusing on detailed components, such as the solder joints. The global model is kept coarse to reduce computational costs, while the local model is refined to accurately capture the behavior of critical regions under dynamic loads. A key advantage of this method is the two-way communication between the global and local models, where detailed results from the local model are fed back into the global analysis at each time step. This allows the localized effects of the solder joints to influence the overall performance of the PCB, ensuring more accurate results compared to traditional single-scale models that ignore such localized interactions. In comparative tests, the two-scale co-simulation method demonstrated significant improvements in both accuracy and computational efficiency. Benchmarking results showed that the method achieved an error rate of only 10% compared to traditional modeling approaches. More importantly, it reduced computation time by up to 73% in the best case, highlighting the method’s potential to provide highly accurate results while saving valuable computational resources. This efficiency is particularly crucial in dynamic simulations, where complex, time-consuming calculations can otherwise hinder the design and optimization process. The two-scale approach is highly versatile and can be applied to a wide range of engineering problems that require detailed modeling of localized phenomena. While this study focuses on solder joint modeling in PCBs, the method is equally effective for other critical components, such as complex material behaviors, intricate geometries, and structural elements in industries like aerospace, automotive, and electronics. The flexibility of the two-scale approach ensures both computational efficiency and accuracy, making it well-suited for a variety of analyses. By integrating the detailed local model results into the global analysis, this method improves the accuracy and efficiency of simulations across diverse engineering applications. In conclusion, this study presents a novel two-scale co-simulation method that balances computational efficiency and high-resolution detail, providing an effective solution for modeling solder joints under shock and vibration conditions. Its versatility makes it a valuable tool for engineers across multiple industries, ensuring more reliable and accurate simulations for modern technology.
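The following Python sketch illustrates the essence of such a two-scale exchange on two toy single-degree-of-freedom oscillators: the coarse "global" model advances with the latest interface force from the "local" model, which is in turn driven by the global interface motion. All masses, stiffnesses and the load are invented for the example; it is not the solver coupling used in the study.

```python
# Conceptual sketch of a two-scale (global/local) co-simulation loop with two-way
# exchange at every time step. The 1-DOF "board" and "joint" oscillators are toy
# stand-ins, not the PCB/solder-joint models of the study.
import numpy as np

dt, n_steps = 1.0e-5, 2000

# Global (coarse) model: lumped board mass on a support spring.
m_g, k_g, c_g = 0.05, 2.0e5, 2.0          # kg, N/m, N*s/m
u_g, v_g = 0.0, 0.0

# Local (refined) model: solder-joint mass on a stiff joint spring, driven by the
# board motion at the shared interface.
m_l, k_l, c_l = 1.0e-4, 5.0e5, 0.5
u_l, v_l = 0.0, 0.0

f_interface = 0.0                          # force returned by the local model
base_accel = lambda t: 500.0 * np.sin(2 * np.pi * 2000.0 * t)   # shock-like input

for i in range(n_steps):
    t = i * dt
    # 1) Advance the global model using the latest interface force from the local model.
    a_g = (-k_g * u_g - c_g * v_g - m_g * base_accel(t) + f_interface) / m_g
    v_g += a_g * dt
    u_g += v_g * dt
    # 2) Advance the local model driven by the global interface displacement.
    a_l = (-k_l * (u_l - u_g) - c_l * (v_l - v_g)) / m_l
    v_l += a_l * dt
    u_l += v_l * dt
    # 3) Feed the local reaction back to the global model for the next step.
    f_interface = k_l * (u_l - u_g) + c_l * (v_l - v_g)

print(f"final joint relative displacement: {u_l - u_g:.3e} m")
```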
Presented By Oliver Kunc (DLR - Deutsches Zentrum für Luft- und Raumfahrt)
Authored By Oliver Kunc (DLR - Deutsches Zentrum für Luft- und Raumfahrt)Nicolai Forsthofer (DLR – Deutsches Zentrum für Luft- und Raumfahrt) Marius Schwämmle (DLR – Deutsches Zentrum für Luft- und Raumfahrt) Paul-Benjamin Ebel (DLR – Deutsches Zentrum für Luft- und Raumfahrt)
AbstractIn many areas of mechanical engineering, parts and assemblies are designed iteratively with increasing levels of detail and sophistication. It is common that conceptual designs lead to preliminary designs with coarse geometric descriptions. Furthering these designs necessitates initial structural mechanical assessments. This work describes how existing software/tool platforms for low-fidelity design may be systematically enriched by fully-fledged Computational Structural Mechanics (CSM) capabilities. By means of this enhancement, it is possible to evolve the designs based on sound CSM criteria, increasing the fidelity levels of the designs. Strong focus is put on automation. This paper reaches out to a broad audience seeking efficient CSM processes applicable to their specific contexts. A particular software serves as a continuous example for the description of practical CSM related needs in early design stages. As the authors work in the field of aero-engine design, the Gas Turbine Laboratory (GTlab) showcases how general collections of tools or collaborative engineering platforms of any engineering domain may be equipped with CSM processes. More specifically, it is assumed that a software framework (be it a loose collection of tools or a well-integrated platform) exists that may yield conceptual designs of parts, assemblies or whole machinery. These concepts are usually based on elementary assessments such as sketches, efficiency estimations, major requirement specifications, thermodynamic cycles, or plain experience. Furthermore, the framework is assumed to also yield rough geometric designs of major components, considering e.g. installation space, mass constraints, constructive solid geometry, “3D inflations” of 1D or 2D sketches, etc. but without considerations of non-trivial structural mechanics. Refinement of such coarse designs usually requires CSM assessments such as static integrity, modal behavior, or safety factors for different materials. This paper is about the software design of modules that provide such CSM capabilities. Structural Mechanical Modules (SMMs) of engineering platforms pose a number of challenges to software developers. The authors have gained experience with the development of an SMM in the context of turbomachinery and would like to make this experience available to general developers of SMMs of other engineering disciplines. Five major requirements are placed on SMMs:
1. Automation
2. Embedding of proven external tools, preferably but not exclusively Open Source Software (OSS)
3. Usability for CSM non-experts
4. Sustainability (i.e. maintainability, extendibility and quality assurance)
5. Reliability (regarding the results)
Fulfilling these requirements inevitably leads to questions of the specific software design. A comprehensive list of practical SMM design challenges is given:
1. Re-use vs. re-develop legacy in-house tools
2. Modularization vs. integration
3. Usability and User Experience (UUX)
4. Choices of external tools for meshing, mapping, finite element analysis, etc.
5. Data formats and interoperability
6. Testing
State-of-the-art possible solutions are transparently discussed for each challenge. The authors state the choices they made in their respective use-case, give the reasoning behind these choices, and advocate for certain general SMM architectures and designs.
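As an illustration of the modularization and tool-embedding choices listed above, the sketch below wraps external meshing and solving tools behind thin adapter interfaces and composes them into an automated assessment callable by non-experts. The class names and dummy results are placeholders, not the GTlab implementation.

```python
# Sketch of one possible SMM architecture: thin adapter interfaces around external
# meshing and FE tools, composed into an automated assessment pipeline. The class
# and tool names are placeholders, not the GTlab implementation.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class AssessmentResult:
    max_stress_mpa: float
    first_eigenfrequency_hz: float

class Mesher(ABC):
    @abstractmethod
    def mesh(self, geometry_file: str) -> str: ...

class StructuralSolver(ABC):
    @abstractmethod
    def run_static(self, mesh_file: str, loads: dict) -> AssessmentResult: ...

class DummyMesher(Mesher):
    def mesh(self, geometry_file: str) -> str:
        # A real adapter would call an external (preferably open-source) mesher here.
        return geometry_file.replace(".step", ".msh")

class DummySolver(StructuralSolver):
    def run_static(self, mesh_file: str, loads: dict) -> AssessmentResult:
        # A real adapter would write the solver input deck, run it, and parse results.
        return AssessmentResult(max_stress_mpa=420.0, first_eigenfrequency_hz=180.0)

def assess_design(geometry_file: str, loads: dict,
                  mesher: Mesher, solver: StructuralSolver) -> AssessmentResult:
    """Automated CSM assessment usable by non-experts: geometry in, key figures out."""
    return solver.run_static(mesher.mesh(geometry_file), loads)

result = assess_design("rotor_disk.step", {"rpm": 12000}, DummyMesher(), DummySolver())
print(result)
```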
BiographyMaster of Science in Mathematics at the University of Stuttgart. PhD on multi-scale mechanics at the University of Stuttgart, with a focus on Reduced Order Modeling, efficient low-level implementations and Machine Learning. Since 2020 I have been a scientist at the German Aerospace Center (DLR), Institute of Structures and Design, where efficient software implementations are put to work in my favorite domain of engineering: aviation. My focus is on automating CSM for jet engines and turbo-machinery: automating collaborative design, speed-ups with Machine Learning, and enhanced usability via Large Language Models. Applications are new designs of jet engine components and the assessment of novel materials.
DescriptionThis short course provides a brief overview of the full course that is offered, discussing simulation process integration and optimization methods that engineers could use to enhance their working methods and improve their designs. The course provides information and guidelines on multi-objective and multi-disciplinary optimization using many variable types, including restrictions and decision making. Many algorithms, including Artificial Intelligence and statistical methodologies, are discussed in a practical way to help guide engineers in the creation of successful, efficient optimization strategies. https://www.nafems.org/training/courses/process-integration-and-design-optimization-a-practical-guide/
13:30
Presented By Michael Scott (Coreform LLC)
Authored By Michael Scott (Coreform LLC)Matthew Sederberg (Coreform LLC) Michael Scott (Coreform LLC)
AbstractIsogeometric analysis (IGA) has long been heralded as a transformative approach to simulation, uniting the precision of computer-aided design (CAD) with the rigor of engineering analysis. Coreform’s Flex Representation Method (FRM) takes this promise to the next level, delivering a revolutionary workflow that eliminates the traditional bottleneck that is mesh generation. By immersing CAD geometry into a background mesh of high-order spline elements and employing innovative trimming, quadrature, and solver techniques, FRM achieves a meshing-free approach to simulation that dramatically accelerates setup time without compromising accuracy or capability. Coreform Flex, the first commercial native IGA solver, leverages FRM to tackle some of the most complex nonlinear engineering challenges faced today. These include large deformations, contact mechanics, incompressible elasticity, plasticity, and multiphysics problems, all of which are difficult to model with traditional methods. FRM’s ability to handle complicated geometries without labor-intensive defeaturing and meshing offers engineers a new level of flexibility and efficiency. Coupled with the superior numerical properties of high-order splines, this approach ensures robust and reliable simulations, even for challenging real-world scenarios. In this presentation, we will delve into the foundational concepts behind FRM and demonstrate its capabilities on real-world engineering applications. Attendees will gain insight into how FRM simplifies workflows, enhances simulation fidelity, and enables rapid analysis of complex systems. We will also provide an overview of the underlying mathematics and algorithms that make FRM uniquely effective. Additionally, we will discuss which applications of IGA and FRM are likely to deliver the most immediate value for practicing engineers. From aerospace to defense to automotive and biomedical, Coreform Flex offers a transformative solution for industries seeking to solve their most demanding problems efficiently. We will also touch on emerging technologies that complement IGA and share our perspective on how simulation tools can evolve to better meet the growing demands of the CAE community. This session is designed for engineers, researchers, and simulation experts who want to explore cutting-edge methods that reduce their total time to solution, while expanding the scope of solvable problems. Join us to discover how Coreform Flex is redefining simulation and unlocking the full potential of IGA, empowering engineers to tackle challenges once thought insurmountable.
Presented By Mathilde Laporte (DLR - Deutsches Zentrum für Luft- und Raumfahrt)
Authored By Mathilde Laporte (DLR - Deutsches Zentrum für Luft- und Raumfahrt)Robert Winkler-Hoehn (Deutsches Zentrum für Luft- und Raumfahrt (DLR))
AbstractThe method described below is part of the research project KI-MeZIS (“AI Methods in Condition Monitoring and Demand-Adapted Maintenance of Rail Vehicle Structures” (funding code 19|21024D)). The aim of this project is to develop and apply the potential of artificial intelligence (AI) methods for monitoring rail traffic. With the help of sensors placed on the front, on the supporting structure and on the bogie of the train, AI methods should be able to evaluate and interpret the data. Nowadays, aerodynamics and lightweight construction are the most important requirements for rail vehicles. However, rail components are currently designed according to standards, most of which are no longer up to date and whose origins have been lost over the years. Moreover, this often leads to oversizing of the components and thus to higher costs in production and operation due to a higher mass. To meet the lightweight requirements, new materials can be used or the components can be designed according to the loads applied during operation. This has the advantage that the components can be designed according to the real load and thus lead to an overall mass reduction. In our case, sensors such as accelerometers and strain gauges are used to determine the acting loads. The main goal is then to use these loads for Finite Element Method (FEM) simulations or fatigue analysis. Forces are the easiest loads to define on an FEM model. But the force cannot be so easily determined from accelerometers or strain gauges because of non-linearities. That is why a machine learning method is used to solve the inverse problem. Sensor data, namely acceleration and strain, are given to the neural network (NN), which outputs the force. In order to train the NN, a large number of acceleration/force or strain/force pairs is needed. To that end, an FEM model was created to generate these training data. A code has been developed to automatically create and run FEM simulations while varying the input force. Finally, the output force results from the NN model can be used for FEM simulations, design optimization (such as topology optimization), and/or fatigue analysis.
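A minimal sketch of this inverse training step is shown below, using scikit-learn and synthetic strain/acceleration-force pairs in place of the FEM-generated training data; all coefficients and noise levels are invented for illustration.

```python
# Illustrative sketch of the inverse problem described above: train a neural network
# to map sensor signals (strain, acceleration) to the applied force. Synthetic
# training pairs stand in for the FEM-generated data of the project.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 2000
force_kn = rng.uniform(1.0, 50.0, size=n_samples)            # applied force [kN]

# Stand-in "FEM" response: mildly non-linear strain/acceleration per force, plus noise.
strain = 1.2e-5 * force_kn + 2.0e-8 * force_kn**2 + rng.normal(0, 2e-5, n_samples)
accel = 0.35 * force_kn + rng.normal(0, 0.5, n_samples)
X = np.column_stack([strain, accel])

X_train, X_test, y_train, y_test = train_test_split(X, force_kn, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0))
model.fit(X_train, y_train)

print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
# The predicted force histories can then be applied as loads in FEM or fatigue analyses.
```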
Presented By Javier Domingo Lopez (Airbus Defence and Space)
Authored By Javier Domingo Lopez (Airbus Defence and Space)Ismael Rivero Arevalo (Airbus Defence and Space)
AbstractThe fatigue behaviour of 2024-T351 aluminium alloy, commonly used in aeronautical structures due to its favourable strength-to-weight ratio, is analysed under variable-amplitude axial loading. This study leverages advanced Continuum Damage Mechanics (CDM) simulations implemented through a UMAT (User-Material) subroutine in Abaqus, alongside a cutting-edge simulation environment governed by a UEXTERNALDB subroutine. The UEXTERNALDB subroutine incorporates two essential functions that enhance the simulation’s efficiency and accuracy:
1. Loading Cycle Control: The subroutine reads a binary input file containing the loading sequence during the initialization phase of the analysis, enabling precise control over each loading cycle applied in the simulation. By calculating the necessary information to assess fatigue damage per loading cycle based on interpolated stress states within each analysis step, UEXTERNALDB minimizes the computational resources required. This cycle-by-cycle optimization notably reduces the total simulation time, making it particularly suitable for long and complex loading sequences, unlike conventional methods that require stepwise evaluation.
2. Enhanced Material Calibration for CDM: UEXTERNALDB accesses an external database that contains a neural network specifically trained to model the fatigue behaviour of the 2024-T351 alloy under complex stress states. This neural network is subsequently integrated with the UMAT subroutine at the integration point level, providing a sophisticated material model capable of accurately predicting fatigue under variable loading conditions. This approach is particularly beneficial for evaluating complex stress scenarios, including high stress ratios, which are typical in aeronautical applications.
The simulated results derived from this CDM-based virtual testing framework will be validated against benchmark physical test results to assess accuracy. The feasibility of applying this novel simulation environment to predict fatigue under variable loading sequences will be critically evaluated to support its potential use in the aeronautical industry. In conclusion, this simulation framework could represent a significant advancement in fatigue prediction, reducing costs and time associated with physical testing, thereby supporting safer, more efficient aircraft design.
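Purely to illustrate the cycle-by-cycle bookkeeping that such a framework performs, the Python sketch below accumulates damage over a variable-amplitude loading sequence using a generic power-law increment. The damage law and all constants are placeholders, not the calibrated CDM model or the Abaqus subroutines of the study.

```python
# Conceptual sketch of a cycle-by-cycle fatigue damage loop of the kind the
# UEXTERNALDB/UMAT pair orchestrates: read a loading sequence, evaluate the damage
# increment per cycle from the current stress state, and stop at failure. The
# power-law damage increment and all constants are generic placeholders.
import numpy as np

rng = np.random.default_rng(1)
# Variable-amplitude loading block: (stress amplitude [MPa], stress ratio R) per cycle.
loading_sequence = [(float(a), -1.0) for a in rng.uniform(80.0, 220.0, size=100_000)]

def damage_increment(stress_amplitude, stress_ratio, damage):
    """Generic CDM-style increment: grows with amplitude and with accumulated damage."""
    sigma_ref, exponent = 480.0, 6.0          # placeholder material constants
    mean_stress_factor = 1.0 + 0.2 * max(stress_ratio, 0.0)
    return mean_stress_factor * (stress_amplitude / sigma_ref) ** exponent / (1.0 - damage)

damage, cycles = 0.0, 0
for amplitude, r_ratio in loading_sequence:
    damage += damage_increment(amplitude, r_ratio, damage)
    cycles += 1
    if damage >= 1.0:          # failure criterion: damage reaches unity
        break

print(f"damage = {damage:.3f} after {cycles} cycles")
```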
Presented By Peter Wimmer (Virtual Vehicle Research)
Authored By Peter Wimmer (Virtual Vehicle Research)Christoph Klein (Virtual Vehicle Research GmbH)
AbstractNumerical simulation is a key element on our way to “Vision Zero”. The EU-funded project “V4SAFETY” (Vehicles and VRU Virtual eValuation of Road Safety, https://v4safetyproject.eu/) addresses this topic by providing a widely accepted and harmonized predictive assessment framework for road safety. An essential part is the set of guidelines and recommendations for models and simulation components to properly address different types of safety measures. This paper presents the method for the simulation structure setup that was developed in V4SAFETY. The method can deliver a simulation structure (models and interactions) that is specific to a given impact assessment task. First, a suitable generic simulation structure is defined. This simulation structure consists of models and their interactions during runtime. Which models the structure contains is mostly defined by the combination of the relevant scenario, especially the road user to be protected, and the safety measure whose impact should be assessed. It needs to be noted that, in the context of V4SAFETY, there is always a vehicle in the scenario - as most crashes are vehicle-related. The structure consists of a pre-crash and an in-crash related part. The pre-crash part consists of models active before a crash occurs – typically, this is a multi-physics setup as various simulation domains must be connected. The in-crash part consists of models that are used for in-crash simulation, typically finite element, multibody or surrogate models. A three-step process is defined to come to an appropriate simulation structure:
1) A generic structure is selected out of pre-defined structures. The selection is based on information about the type of safety measure and the road user to be protected.
2) Depending on the chosen type of safety measure, the structure is further detailed, especially the in-crash part. This step is based on the evaluation metric chosen to quantify the impact of the safety measure.
3) The simulation structure is further refined using additional information given in the evaluation scope/research question.
The whole process is implemented in an interactive browser-based tool allowing the user to quickly come to an evaluation-scope-specific simulation structure. This information can then be used to export the simulation structure in a standardised format like SSP (System Structure and Parameterization) and, further, to set up a process that automatically produces a complete, executable simulation model for safety impact assessments of road safety measures.
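A toy version of this three-step selection logic is sketched below as plain lookup rules; the structure names, metrics and rules are invented placeholders rather than the V4SAFETY definitions.

```python
# Minimal sketch of the three-step selection logic described above, implemented as
# plain lookup rules. All structure names and rules are invented placeholders.
def select_structure(safety_measure: str, road_user: str,
                     metric: str, scope_notes: dict) -> dict:
    # Step 1: pick a generic pre-crash structure from the measure type and road user.
    generic = {
        ("in-vehicle", "pedestrian"): ["vehicle dynamics", "sensor model", "pedestrian model"],
        ("in-vehicle", "occupant"):   ["vehicle dynamics", "sensor model", "driver model"],
        ("infrastructure", "cyclist"): ["vehicle dynamics", "cyclist model", "road model"],
    }[(safety_measure, road_user)]

    # Step 2: detail the in-crash part from the chosen evaluation metric.
    in_crash = {"injury risk": ["FE human body model"],
                "delta-v": ["impact surrogate model"]}[metric]

    # Step 3: refine with scope-specific additions (e.g. night-time conditions).
    extras = ["environment model"] if scope_notes.get("night_time") else []
    return {"pre_crash": generic + extras, "in_crash": in_crash}

print(select_structure("in-vehicle", "pedestrian", "injury risk", {"night_time": True}))
```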
BiographyPeter Wimmer is lead researcher at Virtual Vehicle Research in Graz, Austria. He is responsible for the development of numerical simulation and effectiveness assessment methods for integrated vehicle safety systems. He holds a degree in mechanical engineering and has been working with numerical simulation methods for almost 25 years, both in industrial applications and in research contexts.
Presented By Augusto Moura (DCS Computing)
Authored By Augusto Moura (DCS Computing)Christoph Kloss (DCS Computing GmbH) Christoph Goniva (DCS Computing GmbH)
AbstractComputational modelling is invaluable in solving complex problems in various industries. But effective modelling requires balancing multiple, often conflicting, attributes such as accuracy, speed, simplicity, and predictive power. These requirements invariably come with trade-offs: a more accurate model might be slower or more complex; a faster model may have limited predictive capability, and so on. In this context, what makes a model good? Successful model development needs to account for three axes: the needs of the product, the people, and its impact and purpose. On the product side, open-source innovation is a cornerstone of modern computational modelling. By fostering collaboration and allowing users to leverage existing tools, improving and modifying upon them, these open-source ecosystems can lead to massive improvements in modelling capabilities. Rather than revolutionary ideas driving innovation, it is these incremental enhancements leading to great progress. This work demonstrates a plethora of examples of the application of Discrete Element Method (DEM) to solve problems in a wide range of industries, from pharmaceuticals to materials processing, generating invaluable insights for process optimization and tackling complex problems. Scientists and engineers can be empowered by modelling frameworks, centralized modelling platforms where expertise can be shared and workflows streamlined, allowing people to focus on solving application specific problems. This work provides a review of varied modelling approaches taken by the authors over the years, encompassing many different industries, such as mixing applications, battery manufacturing, extruders, pharmaceutical drug manufacturing, agricultural machinery, and so on. It also outlines the concept of an in-development cloud-based calculator platform for facilitating collaboration and innovation in science and engineering. The importance of modelling lies in its broader purpose: the generation of solutions that balance efficiency, precision, and practicality. By emphasizing incremental improvement, open collaboration, and adaptability, computational modelling provides a robust foundation for addressing the multifaceted challenges faced by modern industries.
Presented By Bulent Acar (REPKON Machine and Tool Industry and Trade)
Authored By Bulent Acar (Replon Machine and Tool Industry and Trade)Ali Yetgin (REPKON Machine and Tool Industry and Trade Inc.)Emre Ozaslan (REPKON Machine and Tool Industry and Trade Inc.)
AbstractIn the hot ironing process, a cup-shaped work piece is pushed through a set of dies with decreasing inner diameters. A punch applies the pushing force from the inner cavity while supporting the work piece. The thickness of the work piece is determined by the punch and ring diameters. Cylindrical parts with precision thickness can be manufactured using this process. When the reduction ratio, defined as the final thickness over the initial thickness, is high, more than one die is used and the work piece is gradually thinned. The maximum punch force should be carefully considered for the successful design of manufacturing operations. One of the parameters limiting the maximum force is the machine load capacity, while another is the structural capacity of the work piece, since it is at elevated temperature. The inner surfaces of the dies interacting with the work piece endure contact pressure, work piece sliding and elevated temperature. The final dimensions of the work pieces are affected by the die inner profile and the temperature change during the process. The speed of punch movement is another important parameter since material behavior is strain rate dependent at elevated temperatures. Also, the contact time between the work piece, dies and punch is important for the temperature change in the dies. The die inner profile consists of the die angle at the entrance and exit as well as the diameter of the inner opening. In addition, the stiffness of the die and its surroundings is important for the final work piece outer diameter. Under the action of punch movement, a process force with axial and radial load components is exerted on the dies. Therefore, radial expansion must be controlled for precise dimensioning of the final work piece. Successful operation of the ironing process depends on different parameters such as work piece temperature change, friction, and die deformation. Understanding the effect of these parameters and the interactions between them can be achieved by accurate numerical modeling of the whole process. In this study, a finite element model of a multi-stage hot ironing process was constructed and verified against an experimental study. The dimensions of the final part and the process force were compared with the numerical results. After the finite element model was verified, the effect of parameters such as the axial distance between dies, friction, punch speed and die inner profile was investigated. Maximum punch force, die wear, die temperature and deformation are important output parameters monitored under different configurations. Since the complete production environment was included in the finite element model, the complex interactions between the parameters of a complicated production process can be identified.
Presented By Philipp Wolfrum (Siemens)
Authored By Philipp Wolfrum (Siemens)Gabor Schulz (Siemens Industry Software NV) Mike Donnelly (Siemens Industry Software Inc.)
AbstractWe present an approach that allows accurately simulating hybrid electric aircraft by combining systems level simulation tools with electronics simulation tools through co-simulation.Hybrid electric aircraft offer a way to reduce the environmental impact of the aviation industry. They promise to reduce greenhouse gas emissions significantly, while also reducing noise emissions specifically during ascent and descent close to population centers. The development of hybrid electric aircraft requires being able to model and predict their behavior across different physical domains---aerodynamics, mechanics, thermal dynamics, and electronics. Relevant time scales range from a flight mission duration of several hours down to the sub microsecond switching dynamics of the battery converters. To be able to simulate the different physical domains as accurately as possible, we use dedicated modeling tools for the system level aspects (Simcenter Amesim (SC-Ame)) and the power electronics aspects (HyperLynx AMS (HL-AMS) and PartQuest Explore (PQE)). In order to be able to investigate the coupled behavior of the different domains across the whole range of relevant time scales, we have set up a co-simulation toolchain which couples the aircraft model in SC-Ame as the primary tool with the model of the (power) electronics in HL-AMS as secondary tool. The electronics model is created in PartQuest Explore for early concept exploration, but can be automatically converted to Hyperlynx AMS for further physical (PCB) design development and verification. The electronics models can support SPICE as well as the IEEE/IEC Standard VHDL-AMS. This methodology supports a “supply chain” for device models, where component manufacturers can provide full electrical and package thermal simulation models that their customers can assemble into functional schematics of their applications. Using this co-simulation approach has enabled us to investigate effects which could not be simulated using any of the tools alone. We have modeled the power electronics that provide a stable voltage from batteries to the airplane e-motor at a sub microsecond resolution, revealing the dynamics in the individual DCDC converters. This enables the system level simulation to correctly assess e.g. power losses in the converters and actual DC output voltage. On the other hand, the system level simulation of flight dynamics and battery discharging behavior provides information to the electronics simulation necessary for simulating the exact operation of the battery management system and the converters. In summary, our co-simulation allows us to accurately simulate aircraft behavior, including the failure of specific electronic components under certain control algorithms. This capability enables us to validate mitigation measures, such as component redundancy and failure handling software, ensuring aircraft safety even in the event of component failures.
Presented By Tomasz Płusa (Valeo)
Authored By Tomasz Płusa (Valeo)Grzegorz Basista (Valeo) Nicolas-Yoan Francois (Valeo)
AbstractSimulation departments in manufacturing companies are often focused on performing many repetitive activities while working on the development of selected types of products. This work is most often related to preparing geometry, using a simulation template, creating a mesh, setting boundary conditions and physics, collecting simulation data and creating reports - according to standards. Automation of repetitive processes can significantly reduce the engineer's working time needed to obtain results and free it up for creative work. It also minimizes the risk of errors in the simulation, which consequently also saves time. The Valeo Power division focuses on designing heat exchangers for heat management in cars. During the product development phase of the car radiator, which is the most common heat exchanger, two basic CFD simulations are performed - thermal performance and thermal shock simulation. According to the standards, Valeo radiators consist of headers, tanks, side plates, air fins and tubes. All of these parts, as well as the working fluids - coolant and air - are part of the CFD simulation. Valeo builds radiators with different tube and fin technologies, depending on the project's needs. Due to the complex geometry of the air fins and tubes, direct simulation is not applicable in industrial use. A simplification is needed to reduce the simulation model size. One way to achieve that is to replace the complex geometry with pressure-drop and heat-transfer characteristics for the tube and air fin. The characteristics are first calculated on submodels and then applied to the global CFD model with simplified tube and air fin geometry. Calculation of the characteristics, preparation of the geometry with a special naming convention, and creation of reports according to the standards are the main stages of the thermal performance simulation of the radiator. For the thermal shock simulation there is also a temperature mapping stage for the structural simulation. All these steps involving repetitive and schematic operations performed by an engineer can be replaced by an automatic process. This paper presents a fully automated CFD simulation process of thermal performance and thermal shock for automotive radiators. The only manual operation within the process is CAD geometry preparation and extraction of internal fluid volumes. This process in Catia is supported by a VBA script which guides the engineer and automatically creates surface geometry according to the naming convention. Boundary conditions and specifications are provided by the engineer through a graphical user interface in an Excel file. At this stage the simulation input files, including the STAR-CCM+ simulation file and a Java macro with mesh parameters, simulation type, boundary conditions, etc., are created. Finally, Simcenter STAR-CCM+, managed by the previously created Java input files, runs the simulation from scratch and creates PowerPoint and Excel reports in batch mode on the simulation cluster after the calculation. The automated process significantly shortens the time from receiving the necessary input data to delivering results in the form of reports and temperature maps to the project.
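The sketch below shows, in outline, what such an automation driver can look like: take engineer-supplied parameters, render a solver macro from a template, and launch a batch run. The template contents, file names and the batch command line are assumptions for illustration, not Valeo's scripts.

```python
# Sketch of an automation driver of the kind described above. Paths, the macro
# template, and the solver command line are illustrative assumptions only.
import subprocess
from pathlib import Path
from string import Template

MACRO_TEMPLATE = Template("""
// auto-generated Java macro (illustrative skeleton only)
// mesh size: $mesh_size mm, coolant flow: $coolant_flow l/min, inlet T: $t_inlet C
""")

def build_and_run(case_dir: Path, params: dict, dry_run: bool = True) -> Path:
    macro = case_dir / "run_radiator.java"
    macro.write_text(MACRO_TEMPLATE.substitute(params))
    # Assumed batch invocation of the CFD solver; replace with the site-specific command.
    cmd = ["starccm+", "-batch", str(macro), str(case_dir / "radiator.sim")]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return macro

params = {"mesh_size": 1.5, "coolant_flow": 40, "t_inlet": 95}
print(build_and_run(Path("."), params).read_text())
```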
13:50
Authored & Presented By Gerd Schwaderer (ESI Germany)
AbstractReverse engineering (RE) has traditionally served as a bridge between physical artifacts and their digital representations, enabling applications from product benchmarking to piracy mitigation. While optical scanners and point clouds dominate the RE landscape, the increasing prevalence of simulation meshes introduces a unique opportunity to extend RE methodologies into the virtual domain. A simulation mesh, despite its origins in computational analysis, shares many structural characteristics with a scanned mesh, suggesting it can be processed using similar RE techniques.This paper explores the integration of reverse engineering methodologies into simulation workflows, focusing on converting simulation meshes into CAD models to enhance compatibility and usability. Such conversions, while promising, often encounter limitations: the output is typically non-parametric, reduced to static freeform patches that lack the flexibility of true CAD geometry. Addressing these challenges, we provide a comprehensive overview of current techniques for mesh-to-CAD transformation, including automated surfacing, parametric reconstruction, and hybrid approaches.Key advancements discussed include adaptive patching for improved accuracy, incorporating geometric primitives into mesh reconstruction, and leveraging software tools for parametric solid modeling. Each approach's benefits and drawbacks are critically examined, with emphasis on balancing usability, computational efficiency, and geometric fidelity. Case studies illustrate how these techniques enable applications such as enhanced simulation validation, manufacturing integration, and digital twin creation.While no single method achieves universal applicability, the alignment of RE techniques with simulation outputs presents a scalable framework adaptable to various use cases. By treating simulation meshes as virtual "scanned" data, the paper argues for a paradigm shift that promotes tighter integration of simulation and design workflows. The findings are supported by insights from existing RE solutions, offering a roadmap for adopting these methodologies across industries.This study underscores that bridging simulation and CAD is not merely a technical challenge but a strategic opportunity to unlock the full potential of simulation data. By fostering interoperability and efficiency, reverse engineering becomes a linchpin in the next generation of digital engineering ecosystems.
Presented By Yashwant Liladhar Gurbani (Rolls-Royce Group PLC)
Authored By Yashwant Liladhar Gurbani (Rolls-Royce Group PLC)Marco Nunez (Rolls-Royce plc) Harry Bell (Rolls-Royce plc) Nima Ameri (Rolls-Royce plc) Jon Gregory (Rolls-Royce plc) Shiva Babu (Rolls-Royce plc)
AbstractMany engineering solutions require technologies that rely on specialised know-how and knowledge of the physics mechanisms underpinning their design and operation. As the world moves towards a digital era, current surrogate model approaches are either not fit for processing large databases, or unsuitable to deal directly with data typically deriving from computer-based analyses such as geometry representations and field quantities (e.g., stress, displacements, temperature, etc.). At the same time there is a need for enhanced design space exploration capabilities that overcome the limitations of parametric models, enabling the assessment of innovative design concepts through more free-form geometry modelling approaches. GANs have proven effective at generating hyper-realistic images when trained on many different (but similar) data. From the literature, there is evidence suggesting that conditional Generative Adversarial Networks (cGAN) can provide a valuable means to support engineering design by accurately predicting the results of computationally expensive simulations through the encoding of design information into 2D images [1][2]. However, there is a need for further work to identify and address some of the roadblocks hindering a wider application of this technology. This paper presents the investigation conducted to understand and address some of such restrictions identified for the use of cGAN models on different preliminary design engineering use cases. The models in this study were assessed on engineering and non-engineering data while monitoring their sensitivity to architectural and parametric changes. This deep dive helped gain a better understanding of the applications where such an approach can and cannot be used. Limitations were also identified in tasks conducted as part of the pre-processing of training data, which have driven the motivation to evaluate data encoding in more detail and highlight the need for further developments in this area. Furthermore, the portability of such an approach allows unleashing the crucial benefits of its deployment into a cloud environment in terms of efficiency and cost-effectiveness whilst complying with data classification constraints.
Presented By George Korbetis (BETA CAE Systems)
Authored By George Korbetis (BETA CAE Systems)Ioannis Karypidis (BETA CAE Systems) Christos Tegos (BETA CAE Systems)
AbstractProduct design in the automotive industry is becoming increasingly demanding as new products should reach high performance standards in very short development cycles. Engineering simulation, using FEA, comes to assist in most product development stages to substitute costly experiments for new designs while speeding up the overall process. In this direction, optimization procedures are increasingly employed during the design. Apart from FEA, fatigue analysis is a mandatory process which assures product integrity by accurately predicting the product's life. Using fatigue analysis, the engineer is able to construct stronger yet lighter structures while avoiding overdesign. Special attention should be paid to welded structures since welds are often the weakest part concerning fatigue life. Fatigue analysis is often incorporated into the product design workflow through an optimization process that fine-tunes the structure's efficiency. In this case study, a subframe of a car is subjected to cyclic loads. These loads are derived from a kinematic model of the car which runs on a Belgian block road for ten seconds. A Multi Body Dynamics solver calculates the dynamics of the entire assembly as well as the stresses and displacements on the subframe, which is considered as a flexible body. Several design variables are defined directly on the assembly's FE model to control its shape and properties. Areas of high stress are selected for shape modification where high damage is likely to occur. Additionally, some non-critical areas are selected which can possibly contribute to mass reduction. Finally, the seam-weld lengths are also parameterized, and respective design variables are used. The optimization process is conducted in three steps. Firstly, a DOE (Design of Experiments) study runs to investigate the significance of the design variables. Then, the created experiments train a Response Surface Model using the Kriging algorithm and, finally, optimization runs on the defined Response Surface Model using the Simulated Annealing algorithm. The fatigue analysis is used as a part of the optimization workflow to calculate the damage in critical areas, which is the objective to be improved.
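The three optimization steps described above can be prototyped compactly, for example with a Latin hypercube DOE, a Gaussian-process (Kriging) response surface and simulated annealing on that surface, as in the sketch below; the analytical objective stands in for the MBD/fatigue runs, and the specific libraries are an assumption for illustration, not the tools used in the study.

```python
# Compact sketch of the three-step optimization described above: a space-filling DOE,
# a Kriging (Gaussian-process) response surface, and simulated annealing on that
# surface. The analytical "damage" function stands in for the MBD/fatigue runs.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import dual_annealing
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def damage_metric(x):
    """Placeholder objective: pretend damage as a function of two shape variables."""
    return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

bounds = [(-2.0, 2.0), (-2.0, 2.0)]

# Step 1: DOE (Latin hypercube) to probe design-variable significance.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=40), [b[0] for b in bounds], [b[1] for b in bounds])
y = np.array([damage_metric(x) for x in X])

# Step 2: train a Kriging response surface on the experiments.
surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# Step 3: simulated annealing on the response surface (dual_annealing is a generalized SA).
res = dual_annealing(lambda x: float(surrogate.predict(x.reshape(1, -1))[0]),
                     bounds=bounds, seed=0, maxiter=300)
print("optimum (surrogate):", res.x, "predicted damage:", res.fun)
```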
Presented By Kai Liu (Siemens Digital Industries Software)
Authored By Kai Liu (Siemens Digital Industries Software)Dorlis Bergmann (Siemens AG)
AbstractIn the evolving landscape of system modeling and simulation, the integration of Knowledge Engineering and Generative AI offers unprecedented opportunities for efficiency and accuracy. This presentation explores the transformative potential of Knowledge Graph technology and Generative AI for recommendation-based creation processes of system models. Case studies and practical examples are used to illustrate how integrating knowledge graphs with AI not only enhances and expedites model creation but also ensures that these models adhere to rigorous engineering standards.Knowledge graphs leverage curated engineering knowledge in formal descriptions and serve as a robust foundation for building system models. By capturing and interlinking diverse pieces of information, knowledge graphs enable users to seamlessly access relevant data and insights, facilitate the generation of precise model recommendations and streamline the modelling process and thereby reduce the time and effort needed to construct detailed and accurate simulations.Looking ahead, we envision a future where AI plays an integral role in system modeling and simulation. To harness the power of AI effectively, we must ensure that the data it consumes is derived from a formal, organized source—Knowledge Graphs. Approaches such as Graph RAG (Graph Retrieval Augmented Generation) are particularly pertinent in this context, as they facilitate the extraction and structuring of relevant data, essential for informed AI-driven decision-making. A critical aspect of employing knowledge graphs in system modeling is the validation of this knowledge against established engineering rules. This ensures that the insights derived from the graph are reliable and conform to industry standards.In the beginning we will outline the basic concepts of knowledge graph technology and its application in system modeling. Furthermore, we will demonstrate how AI, when fed with structured data from knowledge graphs, can enhance the modeling process, offering recommendations that are both accurate and efficient.Then our presentation will delve into two main validation aspects. The first is knowledge validation during the curation phase, where rule-based techniques are used to verify the integrity and accuracy of the knowledge being incorporated into the graph, cross-checking against known engineering principles. The second is response validation within the GenAI architecture, which rigorously checks generated outputs against predefined engineering rules and standards before presenting them to users, ensuring that only validated and reliable information reaches the end-users.Finally, we will discuss the implications of this approach for future developments in system simulation, emphasizing the role of formal data structures in enabling sophisticated AI applications, supported by robust knowledge and response validation processes to maintain accuracy and reliability.
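To make the two validation aspects concrete, the sketch below applies simple rule checks to a toy triple store: one check at curation time and one on a generated recommendation before it would reach the user. The triples and rules are invented examples, not the Siemens knowledge graph.

```python
# Minimal sketch of the two validation aspects discussed above, on a toy triple store:
# (1) rule-based knowledge validation at curation time and (2) checking a generated
# model recommendation against engineering rules before it reaches the user.
TRIPLES = {
    ("Pump_A", "hasOutletPressure_bar", 6.0),
    ("Pump_A", "isA", "CentrifugalPump"),
    ("Valve_B", "ratedPressure_bar", 4.0),
    ("Valve_B", "isA", "Valve"),
}

def objects(subject, predicate):
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

def validate_curation():
    """Rule: every component must declare a type via 'isA'."""
    subjects = {s for s, _, _ in TRIPLES}
    return [s for s in subjects if not objects(s, "isA")]

def validate_recommendation(connection):
    """Rule: a recommended connection must not exceed the downstream pressure rating."""
    upstream, downstream = connection
    supply = objects(upstream, "hasOutletPressure_bar")
    rating = objects(downstream, "ratedPressure_bar")
    return bool(supply and rating and supply[0] <= rating[0])

print("untyped components:", validate_curation())
print("Pump_A -> Valve_B valid:", validate_recommendation(("Pump_A", "Valve_B")))
```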
Presented By Seunghun Ryu (Hyundai Motor)
Authored By Seunghun Ryu (Hyundai Motor)Jungkil Shim (Altair Engineering)
AbstractThe types of aggregates loaded in a dump truck vary from soil or building waste to sand, gravel, and rock, and it is common to load up to 27 tons of these aggregates and repeat loading, transporting, and unloading hundreds of thousands of times. Therefore, it is necessary to reliably predict the strength and durability performance of the dump truck loading box in the design stage. Generally, aggregates such as sand and gravel do not follow hydrostatic pressure conditions because their behavior differs from that of fluids. However, so far, the load calculated by a modified hydrostatic pressure theory has been used to perform static strength analysis on the loading box, so the reliability of the results has been somewhat limited. In addition, the strength of the hydraulic cylinder mounting portion of the loading box and the deformation and strength of the Auxiliary Panel outside the rear gate of the loading box could not be verified by static structural analysis methods, which cannot consider the dynamic characteristics of the load caused by the behavior of the aggregate particles. Therefore, in this study, the dynamic behavior of aggregates was simulated through the discrete element method, and the deformation and strength of the dump truck loading box due to the dynamic load caused by the aggregates were predicted when the aggregates were unloaded. In this case, the structural analysis of the dump truck loading box using the finite element method was performed simultaneously with the simulation of the dynamic behavior of the aggregates. This technique is a DEM-FEM 2-way coupling simulation. At each time step, the aggregate particles transfer their loads to the FEM elements, and the deformation due to these loads is calculated. Then, the behavior of the aggregate particles on the deformed elements is updated. Through this iterative calculation, the deformation and strength of the dump truck body due to the dynamic load of the particles can be obtained. In this study, 27 tons of gravel were loaded onto a dump truck, the dumping process was simulated, and the stress and strain of major parts were analyzed. The mass flow rate of gravel during dumping was analyzed to identify the cause of the main load distribution applied to the auxiliary panel, and a design guide for the auxiliary panel was proposed on this basis. A new, reliable and efficient analysis method was thus proposed, evaluating the strength of the hydraulic cylinder mounting part under the large load occurring when the aggregate is unloaded, as well as the deformation and strength of the auxiliary panel.
Authored & Presented By Andreas Nemetz (University of Linz)
AbstractModeling chip formation is essential to any finite element (FE) approach to simulate metal cutting processes. The intricate behavior of the chip, particularly its extreme deformation, combined with the high strain rates across the primary, secondary, and tertiary deformation zones, poses significant challenges in maintaining simulation stability. A central issue is the potential development of highly distorted elements, leading to premature termination of the simulation.The Arbitrary Lagrangian-Eulerian (ALE) method has emerged as a widely used technique in metal cutting simulations to address these issues. This method is particularly effective in managing mesh distortion issues fostered by the high strain rates during metal cutting. By combining the strengths of both Lagrangian and Eulerian frameworks, the ALE method allows for a more robust representation of chip formation. Following this approach, the relevant part of the workpiece can be delimited by Eulerian, Lagrangian, or sliding surfaces, thus achieving model reduction while still capturing critical details of the cutting mechanism.Despite the effectiveness of the ALE approach in preventing mesh distortion, challenges remain, particularly in metal cutting setups with non-constant chip thickness. In contrast to turning processes, milling yields non-constant chip thicknesses due to following a trajectory during the tool's engagement with the workpiece. This transient behavior leads to difficulties using standard adaptive mesh algorithms. They often fail to prevent premature simulation termination. Previous studies have introduced approaches to stabilize the mesh quality within the chip formation zone. These models, validated through experimental data such as measured residual stresses on the tool rake face and online temperature measurements from instrumented end mills, have shown promise in addressing the mesh stability issue. The published approach involves the imposition of kinematic ALE mesh constraints. On the one hand, these constraints modify the behavior of the mesh by ensuring that nodes in the workpiece outside the process zone follow the movement of the cutting tool edge. On the other hand, the mesh within the process zone is constantly updated to maintain a high-quality mesh and preserve simulation accuracy.This kinematic approach allows for a realistic simulation of the milling process kinematics. It represents the movement of the tool relative to the workpiece without the need for transformations of the movement path or the introduction of an artificial initial chip thickness. This method allows the simulation to follow the tool's trajectory for the first time, improving the model's accuracy and stability.Building on these advancements, this work introduces an expanded and novel strategy that further improves the stability and accuracy of the simulation. An initial mesh pattern is defined based on the specific parameters of the cutting process, ensuring that the mesh is well-suited to the expected conditions from the outset. Additionally, the strategy incorporates time-dependent ALE mesh constraints, which guide the evolution of the mesh as the cutting process progresses. These constraints ensure that nodes and their associated elements are dynamically repositioned to follow the cutting tool's movement and the chip root's changing geometry, which becomes thinner as the climb milling process advances. 
The simulation maintains mesh compatibility and stability over time by adjusting nodes appropriately throughout the process. The focus lies on the model-building and model-reduction aspects of chip formation. Integrating the new techniques into existing FE models provides a more robust and reliable method for simulating chip formation in metal cutting processes, improving accuracy and computational efficiency.
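The kinematic constraint idea can be illustrated schematically: in the Python sketch below, nodes outside an assumed process zone rigidly follow the tool-edge motion while nodes inside the zone are relaxed by Laplacian smoothing each increment. The grid, trajectory and zone size are placeholders, not the published milling model.

```python
# Schematic illustration of the kinematic ALE idea: nodes outside the process zone
# follow the cutting-edge motion, nodes inside are smoothed to preserve mesh quality.
import numpy as np

nx, ny = 21, 11
xs, ys = np.meshgrid(np.linspace(0, 20.0, nx), np.linspace(0, 10.0, ny))
nodes = np.column_stack([xs.ravel(), ys.ravel()])

def tool_edge(t):
    """Assumed circular tool-edge trajectory (mm) over pseudo-time t."""
    return np.array([10.0 + 5.0 * np.cos(t), 10.0 + 5.0 * np.sin(t)])

def step(nodes, t, dt, zone_radius=3.0):
    motion = tool_edge(t + dt) - tool_edge(t)
    in_zone = np.linalg.norm(nodes - tool_edge(t), axis=1) < zone_radius
    new = nodes.copy()
    new[~in_zone] += motion                        # kinematic constraint: follow the edge
    grid = new.reshape(ny, nx, 2)
    interior = grid[1:-1, 1:-1]
    smoothed = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] + grid[1:-1, :-2] + grid[1:-1, 2:])
    mask = in_zone.reshape(ny, nx)[1:-1, 1:-1]
    interior[mask] = smoothed[mask]                # relax process-zone nodes
    return grid.reshape(-1, 2)

for k in range(50):
    nodes = step(nodes, k * 0.02, 0.02)
print("mean node position after 50 increments:", nodes.mean(axis=0).round(2))
```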
Biography2016: M.Sc. (Dipl.-Ing.) - Mechanical Engineering, TU Wien, Austria. 2019: Ph.D. (Dr. mont.) - Montanistic Studies, Montanuniversität Leoben, Austria.
Presented By Bruno Passone (Dassault Systèmes)
Authored By Bruno Passone (Dassault Systèmes)Martin Schulze (Dassault Systemes)
AbstractPredicting the consequences of a road vehicle's tire impact against a curb is a challenging problem that can best be solved with a multi-physics simulation approach considering the different phenomena that should be analyzed at different scales.During such a maneuver the damage is generally caused by an internal tire failure which requires comprehensive non-linear modeling including components like steel cords, fabrics, rubber compounds, and interactions between the tire itself with the rim and road surfaces. This complexity can be managed by a finite element analysis with the possibility to include geometrical, material, and contact non-linearities of the deformable body.On the other hand, the vehicle mass and inertia properties, the dynamic behavior, the suspension elasto-kinematic, and the driver feedback play an important role in this highly transient phenomenon, making the multibody system simulation the most appropriate approach with the possibility of specifying easily vehicle kinematic hardpoints, bushings characteristics, spring and damper behavior, control logics and flexible bodies from reduced order models or beam based.Due to the non-linear nature of the phenomena, the mutual interaction between tire and vehicle should be simulated to get a realistic behavior.Traditionally, one can use a multibody system simulation with empirical, semi-physical, or physical tire models to calculate the loads that occur during the impact and transfer these loads to a finite element analysis. However, the accuracy loss of the simplified tire model and the complexity of the workflow can make the results unsatisfactory with an unclear return on investment.The proposed work aims to replicate the real-world scenario with a co-simulation approach based on a highly specialized coupling algorithm that enables sub-cycling while preserving stability at the interface; creating and exchanging operators, solving interface forces to preserve velocity compatibility at the interface, and handling constraints at the interface appropriately.It will demonstrate the potentiality of the process, the analysis outcomes, and the computational performance of this approach.
Presented By Ulrich Heck (DHCAE Tools)
Authored By Ulrich Heck (DHCAE Tools), Martin Becker (DHCAE Tools GmbH)
AbstractWhile CFD methods for simpler flow conditions have already found their way into product and process development, complex problems such as multiphase or multiphysics applications often still require intensive model validation and, due to the usually long computing times, model optimisation. One way to provide the developer with a design tool for these more demanding CFD tasks is to create an automated CFD tool for a specific process that is validated and optimised for the application. This makes it particularly easy to use, especially in conjunction with cloud resources, as the frequently required high computing resources can be made available as needed. This means that multi-node computer architectures can be addressed in the background under Linux and the administration of hardware systems on site is no longer necessary. Using the example of modelling the cooling behaviour of titanium components during heat treatment, the creation and application of such a tool to reduce creep processes during cooling is demonstrated. Natural convection, heat conduction, radiation exchange and energy release due to microstructural transformation are taken into account in a transient conjugate heat transport analysis. The CFD model was created in OpenFOAM, validated by numerous experiments and optimised in terms of computing time. Both cooling by natural convection in quiescent air and cooling by forced convection in a rapid cooling chamber were considered. For both cases, good agreement was found with the experimentally determined temperature curves and cooling rates [1]. Different phases of the cooling mechanisms could be identified, e.g. periods in which microstructural transformation and radiation dominate the temperature curve could be distinguished from the phase of convection-dominated cooling. The CFD model setup and the simulation process are fully automated: the user provides the geometry as an STL assembly and defines the simulation parameters such as material data, initial temperatures and cooling times. The model is meshed fully automatically and the simulation is carried out. The simulation tool can be used both in-house and in the cloud. An interface to Abaqus for the use of transient temperature fields in creep analyses has been implemented. The tool is used by the end user to optimise the cooling conditions of titanium components in the preliminary design phase. In particular, it was shown that component distortion due to creep is minimised if the component is cooled as evenly as possible on the top and bottom surfaces. In this context, the grate on which the component lies is of particular importance, as a large amount of energy is stored here, which is exchanged with the component over a long period of time through radiation. As a result, the component cools down more slowly on the underside. Numerical simulations can be used to test various measures in advance in order to compensate for these effects and ensure uniform cooling of the component. References: [1] U. Heck, M. Becker, R. Paßmann, "A coupled flow, heat and structural analysis tool to reduce creep during heat treatment of titanium components", NAFEMS Multiphysics Conference, Munich, 14-15 Nov. 2023.
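The flavour of such an automation layer can be sketched as follows: the user supplies an STL assembly and a few process parameters, the script patches the case dictionaries of a prepared template and launches standard OpenFOAM utilities and the conjugate-heat-transfer solver from the shell. The case layout, placeholder keys, and the exact utility sequence are assumptions that depend on the OpenFOAM variant and the concrete tool; error handling and region setup are omitted.

```python
import shutil
import subprocess
from pathlib import Path

def run(cmd, case):
    """Run an OpenFOAM utility/solver inside the case directory."""
    subprocess.run(cmd, cwd=case, check=True)

def setup_and_run(template_case, stl_file, initial_temp_K, end_time_s):
    case = Path("run_case")
    shutil.copytree(template_case, case)                  # copy prepared template case
    shutil.copy(stl_file, case / "constant/triSurface")   # user geometry as STL
    # naive parameter substitution of placeholder keys in the template dictionaries
    for rel_path, key, value in [("0/T", "INITIAL_T", initial_temp_K),
                                 ("system/controlDict", "END_TIME", end_time_s)]:
        p = case / rel_path
        p.write_text(p.read_text().replace(key, str(value)))
    run(["blockMesh"], case)                    # background mesh
    run(["snappyHexMesh", "-overwrite"], case)  # body-fitted mesh from the STL
    run(["chtMultiRegionFoam"], case)           # transient conjugate heat transfer
```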
Authored & Presented By Kuangcheng Wu (NSWCCD)
AbstractHigh Performance Computing (HPC) has recently been applied to many scientific and engineering fields, and even to the financial sector. With the increasing availability of and demand for computational resources (e.g., CPUs, GPUs, memory, storage space, network), more HPC calculations with faster turn-around times are expected in support of large Finite Element (FE) analyses, design optimization, design space exploration, etc. Simply porting FE solvers from desktops to HPC systems and executing them by brute force in an HPC environment will not fully utilize the HPC’s capability. Software and applications need to be adapted or refactored to effectively use the hundreds, thousands, or more processors available in an HPC system. This paper presents two advanced numerical techniques combined with HPC to solve structural acoustic analyses accurately and efficiently, especially for large FE models. The first numerical technique is “Finite Element Tearing and Interconnecting (FETI)”, an iterative solver and massively parallel code developed by researchers from Stanford University. It can effectively scale across hundreds or thousands of computing nodes. The other technique is “Adaptive Krylov subspace and Galerkin Projection (AKGP)”, which applies a user-defined tolerance to speed up the frequency sweep of an FE model. Instead of calculating frequency response functions (FRF) at every frequency point, AKGP only requires full solutions at a small subset of the original frequency range and uses the defined tolerance to calculate the FRF at the remaining frequency points. FRF calculations are commonly performed in structural acoustics analyses to predict structural responses and identify dynamic characteristics of the underlying structures. Several examples will be presented to demonstrate the benefits of combining HPC with the two advanced techniques in conducting structural acoustic predictions. The accuracy of the advanced techniques is first validated by comparison with Abaqus predictions. The efficiency of combining the advanced techniques with HPC will be demonstrated on a large FE model with more than 60M degrees of freedom (DOF). The model is exercised on the DoD HPC system and utilizes more than 100 nodes to speed up the overall turn-around time.
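The tolerance-driven frequency sweep can be illustrated with a small, generic reduced-basis/Galerkin-projection sketch in the spirit of AKGP (not the referenced implementation): full-order solves are performed only at a few sample frequencies, the resulting solutions span a reduced basis, the projected small system is solved at every frequency, and a new sample is added wherever the residual exceeds the user tolerance. Damping is neglected here for brevity.

```python
import numpy as np

def frf_sweep(K, M, f, freqs, tol=1e-4, max_basis=40):
    """Adaptive Galerkin-projection sweep over angular frequencies `freqs`."""
    V = np.zeros((K.shape[0], 0))                       # growing reduced basis
    samples = [freqs[0], freqs[-1]]                     # initial full-order solves
    while True:
        for w in samples:
            u = np.linalg.solve(K - w**2 * M, f)        # full-order solution
            V = np.linalg.qr(np.hstack([V, u[:, None]]))[0]
        Kr, Mr, fr = V.T @ K @ V, V.T @ M @ V, V.T @ f  # Galerkin projection
        U = np.column_stack([V @ np.linalg.solve(Kr - w**2 * Mr, fr)
                             for w in freqs])           # cheap reduced solves
        res = [np.linalg.norm((K - w**2 * M) @ U[:, i] - f) / np.linalg.norm(f)
               for i, w in enumerate(freqs)]
        worst = int(np.argmax(res))
        if res[worst] < tol or V.shape[1] >= max_basis:
            return U, res                               # responses per frequency
        samples = [freqs[worst]]                        # enrich basis adaptively
```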
Presented By Sunil Sutar (Dassault Systemes)
Authored By Sunil Sutar (Dassault Systemes), Sachin Ural (Dassault Systemes), Jayant Pawar (Dassault Systemes), Anand Pathak (Dassault Systemes), Karl Desouza (Dassault Systemes)
AbstractThe rising demand for high-performance medical devices, including hypodermic syringes, has created an urgent need for more efficient design and validation methods. Traditional approaches that rely heavily on physical prototyping are often time-consuming and expensive, which significantly limits innovation. However, advancements in machine learning (ML) and simulation-based techniques offer opportunities for optimizing these processes, enabling faster evaluation and improved accuracy in predicting device performance. In this study, we introduce a physics-driven ML model integrated with unified modeling and simulation to evaluate the performance of hypodermic syringes. Our objective is to minimize reliance on physical testing by effectively leveraging virtual prototypes and advanced predictive analytics. We first create a high-fidelity virtual twin of the syringe that complies with the ISO 7886 standard for sustaining force, ensuring an accurate reflection of its physical behavior during operation. Subsequently, we generate comprehensive datasets through various sampling methods as part of a structured design of experiments. These datasets are utilized to rigorously train and test the ML model, which demonstrates impressive accuracy in predicting the sustaining force for new syringe designs, validated by several key performance metrics. Our systematic approach provides a flexible framework to streamline both the design and virtual validation processes for syringes, enabling manufacturers to efficiently adapt to evolving market demands. Additionally, this methodology allows designers to rapidly explore different design options and gain valuable insights through a template-based methodology. Ultimately, this research highlights the immense potential of integrating machine learning with physics-based modeling to transform the syringe design process, paving the way for innovative and efficient solutions in medical device manufacturing. This advancement not only enhances product development timelines but also contributes to improved patient outcomes through the reliable and safe use of hypodermic syringes in clinical settings, illustrating the importance of advanced modeling techniques in modern medical device design and development.
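A minimal sketch of the surrogate step under assumed design parameters: a design of experiments over syringe parameters is paired with simulated sustaining forces (a synthetic stand-in here), and a regression model is trained and scored on held-out designs. The parameter names and the data generator are hypothetical placeholders for the validated virtual-twin simulations described above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def simulate_sustaining_force(X):
    """Synthetic stand-in for the virtual-twin FE result (illustration only)."""
    d, interference, mu = X.T
    return 0.5 * d * interference * mu * 10.0 + rng.normal(0.0, 0.05, len(d))

# design of experiments over assumed syringe parameters
n = 200
X = np.column_stack([rng.uniform(4.0, 12.0, n),    # barrel inner diameter [mm]
                     rng.uniform(0.1, 0.6, n),     # plunger-seal interference [mm]
                     rng.uniform(0.2, 0.8, n)])    # seal friction coefficient [-]
y = simulate_sustaining_force(X)

# surrogate training, scored on unseen designs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("R^2 on unseen designs:", r2_score(y_te, model.predict(X_te)))
```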
Authored & Presented By Jan Papuga (Czech Technical University)
AbstractFatigue estimation remains a complex and imprecise tool for predicting the behavior of machines and their components under real-world service conditions. The wide range of factors that influence fatigue crack initiation and growth—many of which are often unknown or difficult to quantify—means that fatigue estimation models must inherently account for uncertainty and variability. Over the past century, the challenge of incorporating these factors into reliable fatigue predictions has been extensively studied. However, the solutions developed so far often rely on relatively small datasets, which are now outdated. While these models have been broadly accepted, the verification processes intended to update them with newer data have stagnated. Collecting sufficient new datasets for verification can take years, and few institutions are willing to undertake such lengthy efforts. To address this, the FABER project (Fatigue Benchmark Repository) was launched as a collaborative initiative under the COST (European Cooperation in Science and Technology) framework. This large-scale effort aims to compile curated datasets from existing experimental fatigue studies. These datasets will serve multiple purposes: validating and verifying existing fatigue models, assessing new computational methods, and comparing different approaches to fatigue analysis. A network of researchers and engineers is essential to ensure consensus on the interpretation and application of these datasets. The goal is to aggregate enough data to support the application of advanced methods, such as artificial intelligence (AI) and machine learning, in fatigue analysis. To accelerate the transition from concept to practical application, FABER is also developing an open-source, Python-based library of fatigue analysis tools. This library will allow for rapid adoption, adaptation, and extension by the broader research community. The availability of such a solver will ensure that the fatigue models published in research papers produce the expected results when applied to real-world inputs—something that is not always guaranteed with current methods. This presentation will explore these challenges and objectives, explaining the rationale behind these efforts and outlining the strategy to achieve these ambitious goals.
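As an illustration of the kind of building block such an open-source fatigue library can expose (this is a generic sketch, not the FABER library's actual API), consider a Basquin S-N curve combined with Palmgren-Miner damage accumulation over a stress-amplitude histogram; all material parameters below are illustrative.

```python
import numpy as np

def basquin_life(stress_amplitude, sigma_f=900.0, b=-0.09):
    """Cycles to failure N from S = sigma_f * (2N)^b (illustrative parameters, MPa)."""
    return 0.5 * (np.asarray(stress_amplitude, float) / sigma_f) ** (1.0 / b)

def miner_damage(amplitudes, counts, **sn_params):
    """Accumulated damage D = sum(n_i / N_i); D >= 1 indicates predicted failure."""
    N = basquin_life(amplitudes, **sn_params)
    return float(np.sum(np.asarray(counts, float) / N))

# example: a small load spectrum (stress amplitudes in MPa, applied cycle counts)
print(miner_damage([400.0, 250.0, 150.0], [1e4, 1e5, 1e6]))
```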
Authored & Presented By Brant Ross (EnginSoft USA)
AbstractMany different types of physics can be considered in the simulation of robot performance. A robot is a mechatronics device since it is a mechanical system that can move a load from point to point, guided by a sophisticated controller to meet specifications. The accurate prediction of the dynamic performance of a robot requires a mechanical model that correctly represents the flexibility of the components and joints, as well as an accurate representation of the controller system, including the actual controller firmware code where possible. It is also necessary to represent the realistic dynamic performance of the actuators. Robotics in manufacturing, production, and packaging is of interest to a variety of industries, including transportation and mobility, industrial equipment, aerospace and defense, consumer packaged goods and retail, energy, and materials. This presentation will provide background information and an assortment of case studies that show how motion-focused, multiphysics simulation can accurately predict advanced robotic behavior prior to a hardware prototype being built, reducing cost and risk. Discussion topics include: 1) Role and application of rigid body dynamics, 2) Integration with nonlinear controllers, 3) Application of sensors for closed-loop adaptability, 4) Addition of linear and nonlinear flexible bodies, 5) Consideration of gripper flexibility and contact, 6) System-level validation of dynamic behavior, 7) Impact on complexity when considering a mobile robot, and 8) Integration of particle-based CFD for robots that control a fluid jet with significant momentum. Two case studies will be reviewed in detail. The first considers robot placement accuracy and frequency response of the system considering joints and arms with various stiffnesses. The second case study adds the complication of heavy fluid flow at the end effector on the positioning accuracy during a sweep of the arm. Namely, a substantial flow of liquid exits a tube during the motion of the robot arm. Insights will be gained regarding the effect of the fluid flow on system vibrations and the ability of the arm to traverse the target curve accurately. Robot designs have been simulated for decades, but not with multiple types of physics at the same time. In many cases the simulation results were accurate and useful. However, in some cases, the interactions between the different types of physics were missed, and the robot exhibited problems that were not indicated in the simulation results. The use of multiphysics simulation techniques reduces the probability of missing key design limitations. Realistic simulation therefore calls for multiphysics: multibody dynamics, nonlinear FEA, controls, and CFD. In order to take full advantage of multiphysics simulation, best practices should be followed in creating the models and checking the results.
Biography• Ph.D. degree in mechanical engineering from Brigham Young University. • Licensed professional engineer. • 7.5 years as an engineer at John Deere and General Motors. • 9 years with Mechanical Dynamics (ADAMS multibody dynamics software). • 3 years with FunctionBay. • 16 years with MotionPort (founder, owner, president). • 5 years with EnginSoft USA (acquired MotionPort in 2020); currently co-founder, previously CTO.
Presented By Marco Evangelos Biancolini (RBF Morph)
Authored By Marco Evangelos Biancolini (RBF Morph), Emanuele Di Meo (RBF Morph), Claudio Ponzo (Nissan Technical Centre Europe), Sarwar Ahmed (Nissan Technical Centre Europe), Adam Collins (Nissan Technical Centre Europe), Raff Russo (Nissan Technical Centre Europe)
AbstractThis work presents an innovative collaborative research initiative between Nissan Technical Centre Europe, RBF Morph, and the University of Rome Tor Vergata aimed at advancing the design of automotive road wheels through a multi-physics optimization approach. Road wheels are a critical component where aesthetics, performance, and efficiency converge, making their optimization a multifaceted challenge. The proposed methodology addresses three core requirements. The first is styling, as the wheel’s design must align with consumer appeal and brand identity, making it a key selling point. The second is structural performance, achieved through rigorous finite element analysis (FEA) and experimental validation to ensure compliance with impact and fatigue durability requirements as well as to guarantee optimal NVH and handling characteristics. The third is aerodynamics, which is particularly vital for electric vehicles due to the significant impact of spoke design on drag and vehicle range. Computational fluid dynamics (CFD) simulations and wind tunnel tests are integral to this analysis. An integrated computer-aided engineering (CAE) workflow has been developed, allowing design iterations generated from styling considerations to be evaluated against structural and aerodynamic key performance indicators (KPIs). The key enabler of this approach is advanced mesh morphing, which facilitates rapid geometry updates while maintaining high-fidelity simulations. To further accelerate the process, reduced-order models and artificial intelligence are employed to refine designs and achieve superior performance outcomes efficiently. The software platform rbfCAE is adopted to orchestrate shape control across CAE solvers, as it allows CAD-defined variations to be adapted onto computational meshes both for FEA (in this specific case, solid models for the NX Nastran solver) and for CFD (in this specific case, the volume mesh for the HELYX solver). The automation can be upfront-driven or evolutive. In both cases a set of key shape parameters is defined so that the high-fidelity models can be updated. In the upfront-driven case, a dataset of high-fidelity simulations is computed by means of intensive HPC automation and snapshots are created. Proper orthogonal decomposition combined with AI is then adopted to compress the multi-physics results, and an interactive inspection of the obtained reduced-order models can be performed in the rbfCAE UI and/or in its companion rbfVR tool; in the evolutive approach the shapes are computed in sequence and the optimisation process converges toward the optimal design. This study highlights the synergy between advanced simulation tools and cross-disciplinary expertise, offering a robust solution to enhance both the performance and aesthetics of wheels, with a particular emphasis on the unique demands of electric vehicles. The two proposed approaches are compared, showing how the innovative AI/ROM approach, at a higher upfront cost, allows optimal performance to be sought whilst controlling the aesthetic result. The more traditional optimisation tool requires fewer iterations to reach better performance, but the resulting shape has to be accepted regardless of the styling requirement.
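The general technique behind the workflow above, radial-basis-function mesh morphing, can be sketched generically with SciPy (this is not the rbfCAE implementation): displacements prescribed at a set of control points are interpolated with RBFs and applied to every node of the computational mesh, FEA or CFD alike; fixed regions are handled with zero-displacement control points.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def morph(mesh_nodes, control_points, control_displacements):
    """Return morphed node coordinates from prescribed control-point motion."""
    rbf = RBFInterpolator(control_points, control_displacements,
                          kernel="thin_plate_spline")
    return mesh_nodes + rbf(mesh_nodes)

# toy example: eight fixed "far field" corner points plus one handle pulled in x
nodes = np.random.rand(1000, 3)                              # stand-in mesh nodes
corners = np.array([[x, y, z] for x in (0.0, 1.0)
                               for y in (0.0, 1.0)
                               for z in (0.0, 1.0)])
ctrl = np.vstack([corners, [[0.5, 0.5, 0.5]]])
disp = np.zeros_like(ctrl)
disp[-1] = [0.05, 0.0, 0.0]                                  # shape parameter
new_nodes = morph(nodes, ctrl, disp)
```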
Authored & Presented By Jiin Jung (Hyundai Motor Group)
AbstractThe complexity of vehicle development is increasing due to diverse power sources and advanced technologies, requiring efficient and flexible systems. The V-model-based target cascading method has limitations in early-stage design due to broad performance requirements and high design variability. This study aims to probabilistically quantify uncertainties in design variables and vehicle performance early in the development process to ensure robustness and reliability. A probabilistic analytical target cascading method using a machine learning model predicts R&H performance variability. In this study, we proposed a target cascading process for stochastic analysis using machine learning models to strengthen R&H performance in the architectural stage. Based on this process, the correlation between handling performance, K&C characteristics, and design variables was quantitatively identified across the vehicle, system, and part levels of the V-model. In addition, changes in handling performance at the part level were quantitatively identified according to changes in hardpoint positions and bush stiffness. The uncertainty in the performance of the vehicle was quantified by checking the distribution of K&C characteristics. Based on these results, the range of front and rear wheel K&C characteristics and the range of design variations of hardpoint positions and bush stiffness required to improve the stability and responsiveness of the vehicle were determined by applying a reliability-based optimization technique. By applying the probabilistic analytical target cascading technique to R&H performance development in the architectural stage of vehicle development, the following effects were confirmed. Through reliability-based optimization analysis, we were able to derive an optimal design that could be realized by presenting the distribution range of system characteristics and vehicle performance. In addition, we were able to secure the R&H performance reliability of the platform by identifying the performance distribution of various product groups within the same platform at an early stage. Finally, by quantifying the correlation between system characteristics and vehicle performance and by predicting and managing dispersion, efficient data-based decision-making was possible for frequent design changes. Based on this research case, if the probabilistic analytical target cascading process presented in the paper is extended to other performance fields besides R&H performance, it is expected that the efficiency of development work can be increased by strengthening vehicle performance in the early stages of vehicle development.
DescriptionDuring our “Speaking of Simulation Live – Machine Learning” session at the NAFEMS World Congress, we will discuss how Machine Learning (ML) can help engineers to tackle everyday challenges. Experts will share practical examples of ML speeding up simulations and saving time. We will discuss how blending physics-based and data-driven models can offer real benefits, but also examine where such approaches can run into problems. Our speakers will talk about what it takes to prepare teams for ML, handle data concerns, and ensure that new tools are trustworthy. We will aim to highlight cases where ML has worked well, while also looking at issues like black-box models and the risk of overconfidence. Attendees can expect a clear view of how ML can support simulation engineers with tasks such as setting up models or analysing results. We will be realistic about the effort involved in adopting these methods, focusing on why validation and careful planning are so important. This session aims to share real-world experience on how ML can help with practical engineering needs, without losing sight of its limits. We look forward to an open conversation about how to make the best use of ML in engineering simulation.
Presented By Tobias Waffenschmidt (3M Deutschland GmbH)
Authored By Tobias Waffenschmidt (3M Deutschland GmbH), Markus von Hoegen (3M Deutschland GmbH)
AbstractIn many engineering applications, the integrity of adhesive bonds must be ensured over the service life when exposed to mechanical stresses. In order to assess the structural integrity of adhesive bonds numerically, there is an increasing need to efficiently model and simulate the strength, damage and failure behavior of adhesives. This includes i) structural adhesives (e.g. curable epoxy-, acrylate-, or polyurethane-based adhesives which exhibit thermosetting behavior) but also ii) pressure-sensitive adhesives (adhesive tapes) which behave more like elastomers. Pressure-sensitive adhesives, in particular, typically exhibit a highly nonlinear elastic-viscoelastic material behavior, including strains at failure of up to 500% or more. This makes a numerical treatment using conventional continuum finite elements difficult if not completely infeasible. One approach to circumvent these deficiencies is cohesive zone modeling. Cohesive zone models make use of constitutive traction-separation laws which enable damage and failure mechanisms for adhesives to be incorporated straightforwardly and do not render mesh-dependent results, as would be the case for continuum-based techniques. In particular, the incorporation of the strongly nonlinear and rate-dependent response is challenging, because the conventional bilinear traction-separation laws which are available in basically all commercially available finite element software packages are not sufficient to model such complex material behavior. On the other hand, self-implemented user subroutines, which may be used as an alternative, are mostly not feasible in an industrial environment due to the high implementation effort, inferior robustness and higher computational cost, which mostly prohibits straightforward usage for large-scale simulation problems. This presentation gives an overview of accurate yet efficient nonlinear cohesive zone modeling techniques suitable for modeling damage and failure for i) structural adhesives and ii) pressure-sensitive adhesives (adhesive tapes) without the need for user subroutines. Suitable testing and characterization methods for both adhesive categories will be presented and compared to each other. Material model calibration and parameter identification techniques based on these tests for cohesive zone models will be discussed for rate-independent and rate-dependent use cases. Verification and validation test cases will be discussed to underline the applicability of these models. Finally, a variety of different application cases ranging from quasi-static to impact scenarios will be presented.
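For reference, the conventional bilinear traction-separation law mentioned above can be written as a short sketch with an irreversible damage variable (the nonlinear, rate-dependent formulations discussed in the talk are deliberately not shown; all parameter values are illustrative).

```python
def bilinear_traction(delta, K0=1e4, delta0=0.01, delta_f=0.5, delta_max_prev=0.0):
    """Bilinear cohesive law in mode I (illustrative units).
    K0: initial stiffness, delta0: damage-onset opening, delta_f: failure opening.
    Returns the traction and the updated history variable (max opening)."""
    delta_max = max(delta, delta_max_prev)        # damage is irreversible
    if delta_max <= delta0:
        damage = 0.0
    elif delta_max >= delta_f:
        damage = 1.0
    else:
        damage = (delta_f * (delta_max - delta0)) / (delta_max * (delta_f - delta0))
    traction = (1.0 - damage) * K0 * delta
    return traction, delta_max

# example: load, unload and reload - the unloading path returns to the origin
history = 0.0
for opening in (0.005, 0.2, 0.05, 0.3):
    t, history = bilinear_traction(opening, delta_max_prev=history)
    print(f"opening={opening:5.3f}  traction={t:8.2f}")
```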
Authored & Presented By Tom Deighan (UK Atomic Energy Authority)
AbstractDevelopment of commercially viable fusion power will rely heavily on deployment and targeted development of appropriate simulation methods for use across the design lifecycle and extended use into operations. Tools and methods of an appropriate fidelity are needed to efficiently explore the vast array of potential concepts at an early stage. Whole-system x-in-the-loop virtual operations are needed for upfront and integrated development of control systems, facility HMI design, operation planning and operator training. This is particularly pertinent for Fusion, where doing so through physical prototype systems is either not possible or has prohibitive costs and timescales. Additionally, successful lifetime monitoring and predictive maintenance of fusion components through diagnostic measurements will be limited due to restricted accessibility and operation in a harsh environment. This provides a use case for a digital twin, which can combine data from the physical instrumentation with simulation to provide enhanced augmented diagnostics in ‘real-time’. For all such simulations to be valuable in assessing the design or operational risk they need to be performed probabilistically, to quantify the uncertainty in the predictions in a formal reliability analysis. Systems simulation and related novel reduced order modelling techniques provide a realistic approach to achieving these aims. Although constantly advancing computational capability opens the door for larger and more complex simulations, the environmental and financial cost must be considered and does not override the principle of “appropriate fidelity” and the development of efficient techniques – even if to enable more valuable deployment of computational resource for efficient UQ. This paper presents developments of a novel full-field reduced order modelling technique using an augmented Component Mode Synthesis (CMS) reduction and modal coupling method, describing the reduction process and implementation in the Modelica language. The approach enables efficient simulation of coupled fluid-thermo-mechanical models of complex components within a systems environment, capturing aspects of non-linear behaviour. This is demonstrated through application to a coupled fluid-thermal-structural simulation of a Fusion power plant plasma facing component, discussing the advantages and limitations of the approach. Finally, plans for further development of these methods and the application for simulation of Fusion systems and in wider industry are discussed in the context of moving towards realisation of a probabilistic real-time digital twin.
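The classical starting point for such a CMS reduction is the textbook Craig-Bampton (fixed-interface) transformation, sketched below for dense matrices; the paper's augmentation, modal coupling and Modelica implementation are not shown, and this sketch is included only to fix the basic idea.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Reduce (K, M) to boundary DOFs plus n_modes fixed-interface modes."""
    all_dofs = np.arange(K.shape[0])
    interior = np.setdiff1d(all_dofs, boundary)
    Kii, Kib = K[np.ix_(interior, interior)], K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]
    # static constraint modes: interior response to unit boundary motion
    Phi_c = -np.linalg.solve(Kii, Kib)
    # fixed-interface normal modes of the clamped interior (lowest n_modes)
    _, Phi_n = eigh(Kii, Mii)
    Phi_n = Phi_n[:, :n_modes]
    # assemble the transformation u = T q, q = [boundary DOFs, modal coordinates]
    nb = len(boundary)
    T = np.zeros((K.shape[0], nb + n_modes))
    T[np.ix_(boundary, np.arange(nb))] = np.eye(nb)
    T[np.ix_(interior, np.arange(nb))] = Phi_c
    T[np.ix_(interior, nb + np.arange(n_modes))] = Phi_n
    return T.T @ K @ T, T.T @ M @ T, T   # reduced matrices and recovery map
```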
Presented By Robert Nedelik (Siemens Mobility Austria GmbH)
Authored By Robert Nedelik (Siemens Mobility Austria GmbH), Andreas Ruthmeier (Siemens Mobility Austria), Markus Seitzberger (Siemens Mobility Austria)
AbstractMass transit (metro) rail vehicles must be lightweight to optimize payload and energy efficiency. Mass reduction measures must be implemented in the typically used welded aluminum carbody to meet cost targets and development schedule constraints. To achieve a comprehensive mass reduction of a carbody design, we follow a rigorous CAE-driven design approach starting with topology optimization covering the entire carbody structure. Knowing that the body is a design with a structural frame and a thin, load-bearing skin, both are implemented in a combined model for the topology optimization definition. This approach has a significant impact on the results for the optimized structure and offers a higher potential for mass reduction compared to classical volume-based topology optimization approaches. The second key ingredient for an appropriate optimization result is the selection of the acting load cases and their respective mass impact on the overall result. A representative selection of design-relevant loads must be derived from the wide range of service loads on the real carbody. However, the main step from topology optimization to a feasible carbody design is critical, and the approach must be based on profound experience in the field of mechanical engineering, also considering the present manufacturing capabilities and total cost. Manufacturing costs, infrastructure and the development timeline dictate the goal of using a classic aluminum chassis and proven, widely accepted welding technology as the basis. Aluminum integral design using large extrusion profiles is well established in the rail industry because it provides a high degree of automated manufacturing. The disadvantage of this technology is the relatively high mass of the carbody compared to sheet metal and frame designs, the so-called differential design. However, differential design is typically not as cost effective due to the high labor cost of the manual welding required. The goal is to reduce the mass of the integral design without uneconomically increasing the cost. This requires highly automated manufacturing techniques such as milling, which can be combined with the automated welding of the extrusions. The process implemented is called the quasi-differential design approach. The carbody based on the quasi-differential design must withstand all static and fatigue design loads resulting from rail standards and/or multi-body system simulation results. The car behavior is simulated by Finite Element Analysis with load case superposition, which provides the amplitudes of the acting stress components for each location in the structure. The weld definition in the FEA model provides the allowable notch case in each direction relative to the local weld direction. These post-processing steps provide the utilization for each part of the carbody structure. To verify the concept, a metro carbody currently in production is used as the basis for the new design approach. The resulting structure, using the same design space and loads, results in a structure with twenty percent less mass. This considerable mass reduction is achieved by consistent (CAE-based) structural lightweight design alone, without any change of material or manufacturing technology usually applied for modern aluminum carbody production. A carbody prototype was manufactured to verify the technical concept and the cost predictions.
Finally, the static type test program was successfully completed, fully confirming the huge potential of this innovative design approach for aluminum carbody structures.
Authored & Presented By Hoyoung Lee (Hyundai Motor Company)
AbstractMetal 3D printing processes, such as Selective Laser Melting (SLM) and Electron Beam Melting (EBM), exhibit unique characteristics due to their rapid solidification rates, high cooling rates, and localized heat accumulation during layer-by-layer fabrication. These extreme thermal gradients contribute to the formation of complex and heterogeneous microstructures, including cellular dendritic structures, varying grain sizes, and textures, leading to anisotropic and localized variations in mechanical properties. Such microstructural complexities pose significant challenges to the analytical validation and predictive modeling of parts manufactured through 3D printing, particularly in critical applications like the automotive industry, where reliability, safety, and high performance are paramount. To address these challenges, this study utilizes a crystal plasticity finite element method (CPFEM) to predict the mechanical properties of metal 3D-printed components with greater accuracy. The CPFEM model incorporates distinct microstructural features formed during the metal 3D printing process, such as the cellular structures, grain boundary characteristics, and crystallographic textures, directly into the crystal plasticity framework. By doing so, the model captures the anisotropic mechanical behavior and localized property variations inherent in 3D-printed metals, which are often overlooked in conventional finite element analyses that assume homogeneous material properties. The computational framework developed in this research includes detailed characterization of the microstructure using advanced imaging techniques, such as electron backscatter diffraction (EBSD) and transmission electron microscopy (TEM), to inform the CPFEM simulations. The model is validated against experimental mechanical testing data, including tensile tests and hardness measurements, to ensure its predictive capability. The findings of this research highlight the significant potential of CPFEM in bridging the gap between the microstructural complexities of 3D-printed materials and their macroscopic mechanical behavior. This approach is particularly relevant for automotive applications, where understanding the mechanical anisotropy, residual stresses, and localized property variations is essential for optimizing the design, performance, and longevity of critical components. By providing a robust and comprehensive methodology for numerical analysis, this study contributes to advancing the use of metal additive manufacturing in the automotive industry. It enables the development of reliable, high-performance parts that are tailored to specific requirements, ultimately pushing the boundaries of design possibilities and contributing to lighter, more efficient vehicles.
Presented By Petr Nekolny (Valeo)
Authored By Petr Nekolny (Valeo), Radek Lohonka (Valeo Autoklimatizace K.S.)
AbstractIn personal vehicles, HVAC units are crucial for balancing energy efficiency with passenger comfort, especially as vehicle designs increasingly prioritize fuel efficiency, electrification, and emissions reduction. The performance of HVAC units directly impacts these objectives, making their optimization a priority in modern automotive engineering. Computational Fluid Dynamics (CFD) has emerged as an essential tool, enabling engineers to simulate, analyze, and refine the complex airflow and thermal dynamics within HVAC units. CFD empowers engineers to model intricate fluid and thermal behaviors within HVAC units, facilitating precise adjustments to critical components such as fans, ducts, and heat exchangers. By identifying and addressing inefficiencies during the design phase, CFD-based approaches reduce energy consumption and enhance HVAC unit reliability under diverse driving and environmental conditions. For electric and hybrid vehicles, where HVAC energy consumption significantly affects battery range, CFD optimizations are indispensable. Similarly, in conventional vehicles, improved HVAC efficiency reduces fuel consumption and emissions, supporting global sustainability goals. Beyond energy efficiency, CFD plays a pivotal role in enhancing passenger comfort. By simulating airflow distribution, velocity, and temperature gradients within the vehicle cabin, engineers can design HVAC units that ensure uniform, comfortable conditions for all occupants without excessive energy use. This capability is essential for maintaining occupant satisfaction across a broad spectrum of vehicle types and climatic conditions. CFD also facilitates the integration of advanced technologies, such as heat pumps and next-generation thermal management systems, which are increasingly utilized in electric and hybrid vehicles for efficient heating and cooling. Through virtual testing, CFD allows engineers to optimize these technologies, ensuring effective performance while minimizing energy demand. This paper presents case studies and practical applications of CFD in improving the efficiency, comfort, and adaptability of HVAC units across various personal vehicle types. It underscores the transformative impact of CFD in designing innovative, high-performance HVAC solutions that align with the demands of modern personal transportation.
Authored By Oliver Found (TWI North East), Julian Dean (The University of Sheffield)
AbstractRare-earth permanent magnets, such as Neodymium-Iron-Boron, are essential in any motor and generator application. These materials, however, face two significant challenges: susceptibility to demagnetisation, particularly at edges and corners, and reliance on resource-intensive rare-earth elements. The magnet demagnetising over successive cycles of an externally applied magnetic field weakens the magnets over time, degrading their performance, while the global scarcity and high costs of rare-earth materials raise sustainability concerns. Addressing these issues requires innovative approaches to maintain magnetic performance while reducing rare-earth dependency. One established method is through grain-boundary diffusion. Dysprosium (Dy), a rare-earth element, is introduced to improve coercivity by reducing the demagnetisation field along the grain boundaries and magnet surfaces and corners. Despite its effectiveness, this process increases the use of rare-earth elements, exacerbating supply chain challenges. As such, alternative strategies that achieve similar outcomes without additional rare-earth content must be explored. The work presented here develops and uses a modelling framework to investigate alternative magnetic architectures designed to mitigate demagnetisation without increasing rare-earth content. We first start by developing a 1D model to provide an analysis of cylindrical magnets with varying magnetic profiles. This provides the capability to explore the trade-off between the demagnetisation field and the external field produced by the magnet and shows that certain architectures replicate the effects of reduced magnetisation without relying on introducing Dy. The results are then used to provide validation and design criteria for simulations using COMSOL Multiphysics. The 3D simulations reveal the versatility of non-linear magnetic profiles across three axes, offering significant improvements in magnetic performance. Specifically, applying non-linear magnetic property profiles within the magnets shows a reduction in the demagnetisation field at critical regions, such as edges and corners, with minimal compromise to the external field strength. This model approach also allows us to further optimise the magnetic architecture by concentrating demagnetisation resistance in regions where it is most needed. These advancements enable the design of magnets with reduced rare-earth content, lowering production costs and enhancing sustainability. Additionally, such magnets allow for more efficient use of rare-earth elements, producing a greater number of magnets from the same resource quantity. This work contributes to the development of next-generation permanent magnets, balancing external field performance and demagnetisation resistance while reducing environmental impact. The findings offer a pathway to sustainable magnetic material design, supporting the broader goal of advancing resource-efficient and environmentally responsible technologies.
Authored & Presented By Frank Günther (Knorr-Bremse SfS GmbH)
Authored & Presented By Jochen Kinzig (Cenit)
AbstractFor the robust and reliable design of products, the influence of tolerances must already be taken into account during the design process. Here it is important to have simulation methods available that not only map the function on the basis of the nominal geometry, but also take the influence of tolerances into account. Statistical tolerancing results in a probability distribution of the resulting variants and thus also a probability of failure, provided it is possible to simulate each of the possible variants with regard to function (e.g. strength, acoustics, etc.). However, this is where the challenge lies, as several thousand variants have to be considered for statistical tolerancing. With conventional simulation methods (FEM, MBS, etc.), standard computers and average model sizes, the calculation of all tolerance situations would take several months. However, if the tolerances are not taken into account, there is a risk that the product will not be reliable under certain tolerance combinations and that unexpected failures will occur. In order to include the statistical tolerance analysis in the functional analysis and thus ensure the reliability of the product, an alternative simulation method must be used. Machine learning methods can be used here to create prediction models. These prediction models deliver results in real time and are therefore very well suited to carrying out a very large number of analyses in a short time. The training data for machine learning is usually created using a design of experiments in combination with automated simulation of variants. Using the example of a gearbox with two gears, the presentation shows how the simulation models can be prepared for such an automation in order to ensure safe and reliable execution of the automatic process, even with more complex simulation models, like the contact model of the gear pair. The predictive quality of the machine learning model is examined and evaluated. The time advantage resulting from the use of the machine learning model is also evaluated. In the example shown, the strength assessment for a statistical tolerance analysis with 10,000 variants can be reduced from 140 days to only 4 days.
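The overall loop can be illustrated with a small, generic sketch: a modest design of experiments is evaluated with the expensive model (a synthetic stand-in here), a machine-learning regressor is trained on it, and the 10,000 toleranced variants are then evaluated on the surrogate in seconds. The tolerance parameters, distributions, limit value and the FE stand-in are illustrative assumptions, not the gearbox model of the presentation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def expensive_fe_stress(x):
    """Synthetic stand-in for the automated FE strength evaluation [MPa]."""
    centre_dist, backlash, bore = x.T
    return 300 + 80 * backlash - 40 * (centre_dist - 50) + 15 * bore

# 1) design of experiments + automated simulation (here: 150 training variants)
X_doe = np.column_stack([rng.uniform(49.9, 50.1, 150),     # centre distance [mm]
                         rng.uniform(0.05, 0.20, 150),     # backlash [mm]
                         rng.uniform(19.98, 20.02, 150)])  # bore diameter [mm]
y_doe = expensive_fe_stress(X_doe)
surrogate = RandomForestRegressor(n_estimators=300).fit(X_doe, y_doe)

# 2) statistical tolerancing: 10,000 variants from the tolerance distributions,
#    evaluated on the surrogate in real time
X_mc = np.column_stack([rng.normal(50.0, 0.03, 10_000),
                        rng.normal(0.12, 0.03, 10_000),
                        rng.normal(20.0, 0.007, 10_000)])
stress = surrogate.predict(X_mc)
print("estimated failure probability:", np.mean(stress > 320.0))
```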
BiographyJochen Kinzig is a simulation consultant at CENIT AG. He studied mechanical engineering at the University of Karlsruhe. After graduation he moved into optimization and data analysis while working for the company FE-Design. In his subsequent work at Schaeffler he built up expertise in EMC and Simulation Driven Design. At CENIT AG he works in the service team for Simulia.
Presented By Patrick Wurm (Magna Steyr Fahrzeugtechnik)
Authored By Patrick Wurm (Magna Steyr Fahrzeugtechnik), Iris Hehn (Magna Steyr Fahrzeugtechnik GmbH & Co KG), Alexander Rabofsky (Magna Steyr Fahrzeugtechnik GmbH & Co KG)
AbstractThe accurate and efficient simulation of the various connections inside a vehicle body is crucial for achieving precise results in the FEM analysis, particularly in terms of stiffness, fatigue, strength, and vehicle safety. To ensure accuracy and efficiency, three key factors are essential: 1) accurate input data, 2) suitable connection models for each discipline, and 3) seamless interchangeability between the disciplines. Accurate and comprehensive input data is a necessary prerequisite for any simulation. Therefore, accurately transferring the joining information from Computer-Aided Design (CAD) to Computer-Aided Engineering (CAE) is critical. To this end, Magna Steyr has developed a specialized process and software tools to ensure the data consistency between CAD, CAE, and even production, minimizing the risk of discrepancies. When considering the numerical connection models used in simulations, it is essential to recognize that different disciplines often require different models. For durability analysis, precise stress computation is crucial for predicting fatigue life, as errors in stress calculation are magnified by the inclination of the Basquin-Curve. In crash simulations, accurate prediction of failure behavior in a variety of different load situations is essential for predicting the system's response during a crash event. To facilitate efficient collaboration and maintain consistency, stiffness/durability analysis and crash simulation utilize a common baseline mesh for the vehicle structure and employ mesh-independent connection models on top. This approach not only streamlines the simulation process but also allows for efficient iterations and conflict resolution, ensuring accuracy in predictions. To showcase the accurate and efficient simulation of vehicle body connections, this contribution will demonstrate Magna Steyr's approach to connection treatment in simulation. This demonstration will exemplify the complete process from CAD to CAE to the final validation using a real-life example of a vehicle body. The case study will provide valuable insights into how these processes and tools are applied in practice, offering a comprehensive view of the advantages and improvements achieved in vehicle body simulations.
Biography2010-2015: Graz University of Technology, Austria: Master of Science in Mechanical Engineering (with honors) 2015-2021: Graz University of Technology, Austria: Doctorate in the field of Mechanical Multiscale Simulation (Atomistic/Continuum Coupling) 2021-present: MAGNA Steyr Fahrzeugtechnik - Development of methods and software for durability simulation, RnD
AbstractEnsuring an acceptable vehicle range and good passenger thermal comfort whilst ensuring the safety of electric components is a delicate trade-off in terms of system architecture, component sizing and control logic definition. This paper presents a comprehensive study on the energy and thermal management of electric vehicles, utilizing a full-fidelity system model executed in Simcenter Amesim. The focus is on the integration of high-fidelity subsystem models, including battery, cabin and electric motor, within various drive cycles to identify and address real-world thermal management challenges, particularly those experienced in extreme environments. Current issues include the prioritization of battery cooling over cabin climate control, leading to suboptimal passenger comfort. The paper explores the integration of 3D simulation results for enhanced fidelity. First, the objective is to leverage the accuracy of computational fluid dynamics (CFD) models of cabin thermal management. An innovative methodology to generate the Reduced Order Model is presented. It consists of a step-by-step process: importing the geometry and results from the existing CFD simulation files in Simcenter STAR-CCM+, choosing the desired spatial discretization by adding cutting planes, and then mapping all 3D CFD results onto the model. Secondly, an accurate model of the electric drive is generated to consider electromagnetic and thermal aspects. Starting from an electric machine model developed with Simcenter E-Machine Design, maps of performance and losses as well as a Lumped Parameter Thermal Network (LPTN) model are generated and exported. The system-level model then benefits from an accurate performance/energy consumption model and a realistic temperature estimation of the main motor components during transient cycles. The full vehicle model is used on driving cycles in countries with extreme temperature conditions: 40°C and 2000 W/m² solar load in Saudi Arabia and -20°C and night conditions in Alaska. This integrated vehicle energy management/thermal management framework offers significant advancements in simulation accuracy while maintaining fast computation times (faster than real-time). This innovative workflow provides valuable insights into system performance under challenging conditions and enables more effective strategies in electric vehicles.
Authored & Presented By Nils Wagner (INTES GmbH)
AbstractThis paper investigates the concept of generative design, focusing on the integration of finite element models and topology optimization techniques to achieve optimal structural performance under specific constraints. The study emphasizes the dual objectives of adhering to weight restrictions and maximizing stiffness in the design of lightweight structures. Importantly, the generative design process can originate from either a CAD model or a finite element model, providing flexibility in design development. By employing FEM, engineers can accurately assess structural behaviour under various loading conditions, informing the optimization process. The study also incorporates critical constraints, including global stress limits and load factors derived from linear buckling analysis, to ensure structural integrity. The application of user-friendly wizards further supports the engineer throughout the design process, guiding them in setting parameters and constraints effectively. Herein, topology optimization plays a pivotal role, enabling the creation of innovative designs that satisfy stringent weight criteria while ensuring maximum rigidity. PERMAS is used to solve the topology optimization problems, whereas optimization-relevant data is generated in VisPER via the so-called TopoWizard. The methodology is demonstrated using selected examples from the literature. The result of the topology optimisation is a filling ratio distribution within the design space. A clear solid/void distribution is required to interpret the results. Using the Design Wizard, the hull of the optimized design space can be smoothed, a quad layout generated, CAD-compatible splines fitted and exported as a step file. Furthermore, quadrilateral layouts on surfaces are essential for defining quadrilateral meshes, for fitting splines for engineering design and analysis. Additionally, the paper discusses the significance of the STEP file format for CAD export, enhancing interoperability and collaboration among design teams compared to the previously used .stl format. Through detailed case studies, this research also illustrates how these methodologies can streamline the product development process, leading to advanced structural solutions that balance performance, weight efficiency, and manufacturability. The findings highlight the transformative impact of generative design on modern engineering practices, enabling the creation of high-performance components tailored to specific application requirements. This significantly reduces development times in the product creation process and avoids “manual” optimization steps due to the otherwise usual large number of CAD variants at the start of a new project.
Presented By Uwe Diekmann (Matplus)
Authored By Uwe Diekmann (Matplus), Frederik Klokkers (Porsche AG), Helge Liebertz (Volkswagen AG), Stephanie Herreth (Audi AG), Thies Marwitz (Matplus GmbH)
AbstractMaterial data consistency across computer-aided engineering (CAE) platforms is vital for optimizing design, safety, and performance in automotive applications. Managing material data for CAE poses several challenges, including data harmonization across diverse simulation tools, ensuring accuracy in material models, and maintaining regulatory compliance while meeting the dynamic needs of multiple brands. This presentation outlines the challenges faced, the innovative solutions implemented, and the significant benefits realized in managing material data for CAE applications. Automotive manufacturers operating in multi-CAE environments face significant obstacles. Data fragmentation is a major issue, as material test data originates from various laboratories, leading to inconsistencies and inefficiencies in aggregation. Accurate representation of material behavior under varying strain rates, temperatures, and stress states is another critical challenge, requiring precise model calibration. Regulatory compliance demands stringent management of material release and withdrawal processes, tailored to brand-specific needs while adhering to industry standards. Additionally, achieving vendor independence is essential for long-term sustainability, reducing reliance on proprietary software. The unified Material Master System was extended to address these challenges through innovative strategies. Standardized database representations of material cards were developed to ensure compatibility across platforms such as Pamcrash, LS-Dyna, and Abaqus. These universal cards are linked via bidirectional interfaces that streamline data flows and enhance cross-platform consistency. A centralized platform was implemented to integrate raw and processed test data, including key properties such as plasticity, damage, and fatigue. This system improves data retrieval and validation efficiency. Open-source methodologies, such as the use of the SciPy library, were incorporated to fit constitutive equations like Johnson-Cook and Zerilli-Armstrong models. This enabled precise calibration and parametrization of material models for simulations. Automated tools were developed to generate piecewise input data for CAE applications, reducing manual intervention and improving integration efficiency. To ensure robust data governance, Business Process Model and Notation (BPMN) workflows were implemented, standardizing material data processes and ensuring compliance across brands. The extended Material Master System at Volkswagen Group delivers transformative benefits. The unified approach fosters collaboration among multiple brands, facilitating cross-brand synergies while accommodating individual requirements. Streamlined workflows significantly accelerate the applicability of new materials, enhancing operational efficiency. Harmonization of validations and data improves the reliability of simulations, ensuring higher accuracy. The incorporation of open-source tools provides flexibility and scalability to adapt to evolving requirements, while vendor independence and a centralized system support sustainable, long-term material data management. This presentation offers an overview of the challenges, solutions, and benefits involved in implementing a multi-CAE material management system. It provides actionable insights for industries seeking data harmonization, improved CAE integration, and sustainable material data management practices.
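As a minimal sketch of the SciPy-based fitting step mentioned above (the data arrays below are placeholders and the temperature-softening term of the full Johnson-Cook model is omitted), curve_fit can be used to calibrate the flow-stress terms sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) from tensile data at several strain rates.

```python
import numpy as np
from scipy.optimize import curve_fit

RATE0 = 1e-3  # reference strain rate [1/s]

def johnson_cook(X, A, B, n, C):
    """Johnson-Cook flow stress without the thermal term (stress in MPa)."""
    eps, rate = X
    return (A + B * eps**n) * (1.0 + C * np.log(rate / RATE0))

# placeholder "experimental" data: plastic strain, strain rate, true stress
eps = np.tile(np.linspace(0.01, 0.3, 20), 3)
rate = np.repeat([1e-3, 1.0, 100.0], 20)
stress = johnson_cook((eps, rate), 350.0, 600.0, 0.25, 0.015)   # synthetic target

popt, pcov = curve_fit(johnson_cook, (eps, rate), stress,
                       p0=[300.0, 500.0, 0.3, 0.01])
A, B, n, C = popt
print(f"A={A:.1f} MPa, B={B:.1f} MPa, n={n:.3f}, C={C:.4f}")
```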
Presented By Pranav Shinde (Revolta Motors)
Authored By Pranav Shinde (Revolta Motors), Karthik Balachandran (Dassault Systemes), Srikrishna Chittur (Dassault Systemes), Alok Das (Revolta Motors Pvt Ltd, Qargos)
AbstractCross-wind aerodynamic analysis for two-wheelers is important from the stability standpoint, both while the vehicle is being ridden and while it is parked. In the present work we consider the effect of cross-wind forces on two design variants of the Qargos electric cargo scooter. The first design has a higher cargo capacity but is limited to one rider, while the second variant has a limited cargo capacity but can carry a rider and pillion. In both designs, the front cargo compartment, situated between the two wheels and enclosed within body panels, increases the vehicle's surface area, potentially increasing the impact of crosswind forces on the scooter and rider(s). Understanding the aerodynamic impact is crucial to ensure the scooter's safety and rider comfort. In the present work we study the effect of cross-wind on the stability of the two vehicle variants. In addition to estimating crosswind forces using CFD analysis, we also capture the wind forces on a vehicle parked on its center stand. This study is important from the perspective of fragile cargo getting damaged due to vehicle toppling. Cross-wind aerodynamic analysis using Computational Fluid Dynamics (CFD) can be both a time- and cost-saving strategy for two-wheeler developers. Wind tunnel testing can be prohibitively expensive and in many cases may not have a wide enough cross-section to accommodate a full two-wheeler including the mannequin, forcing the use of scaled models. The vehicle can be subjected to cross-wind velocities due to the terrain in some instances and due to the passage of a large truck on a highway in most other cases. Although the extreme case of a rider falling due to cross-wind is rare, the brief instability of being pushed towards one side can have a frightening and destabilizing effect on the rider. Since aerodynamic forces typically scale with the square of velocity, one strategy the rider could adopt would be to reduce vehicle speed drastically, which may not always be a good idea when driving on freeways. Due to the unique configuration of the present vehicle, we expect the side forces to be significant in magnitude, as the side section view shows a fully covered space. There is extensive literature covering both experimental research and numerical studies of bicycles with riders, motorcyclists, scooters, cars and trucks. In all these cases, the main motivation is rider discomfort in the case of two-wheelers and the steering corrections needed in the case of four-wheelers. The first part of the study examines the effect of crosswind for the two vehicle designs including the rider(s). A RANS approach with the SST k-ω model is utilized in the present study. The second part of the study considers a vehicle parked on its center stand and subject to high wind velocities typically encountered on windy days. This study becomes important because of the nature of the cargo: in some cases the cargo could be fragile material that could be damaged should the vehicle topple. The present study is focused on these two aspects of the electric cargo scooter.
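As a worked example of the velocity-squared scaling mentioned above, the quasi-steady side force F = 0.5 * rho * v^2 * A * Cs and the resulting overturning moment for the parked vehicle can be estimated by hand; the projected area, side-force coefficient and centre-of-pressure height below are illustrative assumptions, and it is exactly these coefficients that the CFD study replaces with computed values.

```python
rho = 1.225        # air density [kg/m^3]
A_side = 1.4       # projected side area [m^2] (assumed)
Cs = 1.1           # side-force coefficient [-] (assumed)
h_cp = 0.6         # centre-of-pressure height above ground [m] (assumed)

for v_kmh in (30.0, 60.0, 90.0):
    v = v_kmh / 3.6                                  # crosswind speed [m/s]
    F_side = 0.5 * rho * v**2 * A_side * Cs          # quasi-steady side force [N]
    M_overturn = F_side * h_cp                       # moment about the stand line [Nm]
    print(f"{v_kmh:4.0f} km/h crosswind: F = {F_side:6.1f} N, M = {M_overturn:6.1f} Nm")
```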
BiographyPranav Shinde holds a master’s degree in manufacturing engineering from VIT University and a bachelor’s degree in mechanical engineering from Pune University. He is currently associated with Qargos as a Lead Engineer developing the F9 Cargo Scooter Platform. His research interests include simulation and modelling, design optimisation and material science. He has published articles on computational fluid dynamics, lightweight BIW solutions, and electric vehicle battery and powertrain design.
Authored By Wolfgang Krach (CAE Simulation & Solutions Maschinenbau Ingenieurdienstleistungen)Johan Malm (RISE)
AbstractThe usage of carbon-fibre reinforced composites (CFRP) to achieve lightweight structures has spread further in recent years in the machinery and aerospace industries. The lifetime prediction and structural health monitoring (SHM) of these structures is necessary and costly. The usage of magnetic microwires embedded within the CFRP is aimed at enabling wireless SHM of the components in service. In our setup, cobalt-rich magnetic microwires covered by a glass coating are used within non-crimp fibre (NCF) composites. These magnetic microwires change their electromagnetic properties while being thermally and/or mechanically loaded. This change can be detected and measured wirelessly using a handheld reader system. To achieve a robust signal, electromagnetic simulations are carried out in parallel to experimental work to understand the physical basics of the interaction between CFRP and the metallic microwires regarding electromagnetic fields. Unidirectional CFRP shows high orthotropy regarding electromagnetic properties. Industrially far more relevant multi-ply multi-directional carbon composites exhibit a quasi-isotropic electromagnetic behaviour, which is favourable to the accuracy of the measurements. To achieve a good signal, a frequency range from MHz to GHz is investigated to evaluate the best signal-to-noise ratio. A comparison of the attenuation at 50 MHz and 2.45 GHz shows good correlation between simulations and measurements; the signal strength differs by a factor of 10^3.To check for practical industrial usability, mechanical simulations (FEM) are performed to ensure that the measuring range covers the generally used allowable strains of up to 0.4%. While accounting for the thermal mismatch in the production process, the orthotropic mechanical material properties of multi-ply CFRP and microwires, as well as the rupture strains of the microwires, the simulations show that measuring ranges of up to 1.5% are to be expected.Simulations of different failure scenarios like delamination, cracks, dents or penetration were carried out to differentiate the signals resulting from different failure scenarios and sizes.The results indicate that metallic microwires can be used for SHM. Additional work will be carried out to achieve an advanced differentiation of the different failure scenarios. The portable reader will be redesigned to achieve more robust signals and to allow a quantitative signal assessment regarding different failure modes of CFRP.The work is carried out within the Horizon Europe framework (HORIZON-CL5-2021-D5-01) “INFINITE - Aerospace Composites digitally sensorised from manufacturing to end-of-life”
Presented By Roland Niemeier (Ansys Germany)
Authored By Roland Niemeier (Ansys Germany)Guenther Hasna (Ansys Germany GmbH) Sebastian Hoetzel (Ansys Germany GmbH)
AbstractMetamodels of Failure Probabilities (Fragility Surfaces) make it possible to explore the transition region between failure and non-failure, not only in dependence of, for example, the required lifetime, but also in dependence of other important parameters like the maximum temperature of a thermal load cycle. This approach gives new insights into Wöhler- or Weibull-like curves and surfaces. For example, the Wöhler-like curves and surfaces are the isolines of the Fragility Surfaces. The Fragility Surfaces have many possible cross-domain applications, for example in Prognostics and Health Management based on digital twins. We will focus on several of these cross-domain applications.Qualitative and quantitative failure analysis is important especially for the reliability of components or systems. The information about the transition region from non-failure to failure is critical information. This information should be passed along the value chain while keeping IP non-disclosed. Metamodels provide the possibility to exchange this information without exchanging other, more IP-related information about the simulations.Usually, a huge number of simulations is necessary to study the transition region between failure and non-failure if we take tolerances into account. Fragility Surfaces open a door to a quantitative approach for a more detailed discussion and examination of the transition region between failure and non-failure, enabling many cross-domain applications with a strongly reduced number of necessary simulations. We developed new workflows that can be used for creating Fragility Surfaces. Especially the methodology of nested workflows is very useful. In the inner loop we run a robustness analysis with stochastic parameters (tolerances for example) and in the outer loop we have the control and requirement parameters. The control parameters are typically the most important parameters (maximum temperature, etc.) and a requirement parameter can be a minimum requested lifetime.With this developed methodology we are now able to apply insights and techniques from robust design optimization in a very flexible way, opening up possibilities for a lot of new applications, especially using Fragility Surfaces. We can use this methodology to understand the transition region between failures and non-failures in an early phase of the development based on quantitative failure analysis. The new workflows finally enable the robustness analysis of designs using fragility surfaces, for example helping to ensure that optimized designs will fulfil their required reliability to a high degree of certainty.
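The nested-workflow idea can be illustrated with a small Monte Carlo sketch (an illustrative stand-in model, not the authors' toolchain): the inner loop samples the tolerance scatter and counts failures against a lifetime requirement, while the outer loop sweeps the control and requirement parameters; the resulting grid of failure probabilities is the fragility surface, whose isolines are the Wöhler-like curves.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulated_lifetime(t_max, thickness, defect):
    """Stand-in for the expensive lifetime simulation: lifetime drops with
    maximum cycle temperature and with defect size (illustrative model only)."""
    return 1e6 * np.exp(-0.02 * (t_max - 20.0)) * thickness / (1.0 + 50.0 * defect)

def failure_probability(t_max, required_life, n_inner=2000):
    """Inner loop: robustness analysis over stochastic tolerance parameters."""
    thickness = rng.normal(1.0, 0.05, n_inner)      # tolerance scatter
    defect    = rng.lognormal(-4.0, 0.5, n_inner)   # tolerance scatter
    life = simulated_lifetime(t_max, thickness, defect)
    return np.mean(life < required_life)

# Outer loop: control parameter (max. temperature) and requirement (lifetime)
t_grid    = np.linspace(20.0, 120.0, 21)
life_grid = np.logspace(3, 6, 16)
fragility = np.array([[failure_probability(t, L) for L in life_grid] for t in t_grid])

# fragility[i, j] is the failure probability at t_grid[i] and life_grid[j];
# contour lines of this surface play the role of Wöhler/Weibull-like curves.
```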
Presented By Mark Oliver (Veryst Engineering)
Authored By Mark Oliver (Veryst Engineering)Scott Grindy (Veryst Engineering)
AbstractCohesive zone modeling is a powerful tool for simulating the failure of adhesive joints and adhesively bonded interfaces. Accurate simulation of adhesive or interface failure with cohesive zone modeling requires simulation engineers to define traction-separation laws that describe the stiffness, strength, and toughness of the adhesive or interface under tension, shear, and mixed-mode loading. These parameters are not generally available from data sheets or other references, and so the CZM parameters have to be determined experimentally on a case-by-case basis. Calibration of cohesive zone models is often done using standard test specimens to measure the strength and toughness of the adhesive or interface. Standard fracture mechanics test specimens can also be used to directly measure the traction-separation law for an adhesive bond. This works particularly well for ductile structural adhesives. However, in many cases, the use of standard test specimens or established CZM calibration methods is not possible. Such cases arise when there are size constraints on material samples or if the materials being bonded are prone to large inelastic deformation during testing. Also, standard test methods are often not applicable for cases of high-rate or impact loading. When standard test methods are not applicable, inverse calibration can be used to determine CZM parameters. Inverse calibration involves designing custom mechanical test specimens and loading fixtures along with using FEA to simulate the test. The CZM parameters are then iteratively solved for until the measured and simulated adhesive or interface failure responses are in good agreement. In this presentation, we will discuss inverse calibration strategies for cohesive zone modeling that we have successfully employed for a variety of material systems. We will focus on addressing two commonly encountered scenarios: 1) when the materials being bonded are thin and 2) when the adhesive is being loaded under impact conditions.This talk will be of interest to simulation engineers who are responsible for modeling the failure of adhesive joints or bonded interfaces and are interested in new approaches to calibrate cohesive zone models.
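A minimal sketch of such an inverse calibration loop is shown below, assuming a stand-in function in place of the finite element model of the custom specimen; in practice that function would write the trial CZM parameters into the input deck, run the solver in batch, and extract the simulated force-displacement curve.

```python
import numpy as np
from scipy.optimize import minimize

disp = np.linspace(0.0, 0.5, 60)  # loading points of the custom test [mm]

def run_fea(strength, toughness):
    """Stand-in for the FE model of the custom specimen: returns a triangular
    force-displacement response of a bilinear cohesive law (illustrative only)."""
    d_fail = 2.0 * toughness / strength            # final separation of the bilinear law
    d_peak = 0.1 * d_fail                          # separation at peak traction
    rising  = disp / d_peak
    falling = (d_fail - disp) / (d_fail - d_peak)
    return strength * np.clip(np.minimum(rising, falling), 0.0, None)

# "Measured" curve: generated here from known parameters plus noise, purely so
# the sketch is self-contained; a real workflow would load the test data.
rng = np.random.default_rng(0)
force_meas = run_fea(30.0, 4.0) + rng.normal(0.0, 0.3, disp.size)

def mismatch(p):
    strength, toughness = p
    return np.sum((run_fea(strength, toughness) - force_meas) ** 2)

res = minimize(mismatch, x0=[20.0, 2.0], method="Nelder-Mead")
print("calibrated strength, toughness:", res.x)
```

The same pattern extends to mixed-mode parameters or rate-dependent laws by enlarging the parameter vector and the set of test curves included in the mismatch function.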
Presented By Markus Wagner (Ansys Deutschland GmbH)
Authored By Markus Wagner (Ansys Deutschland GmbH)Siby Mandapathil (Ansys) Vishnu Venkataraman (Ansys) Martin Husek (Ansys)
AbstractElectronics systems are becoming more and more compact and require sophisticated optimization techniques to achieve optimal performance. The growing complexity of the design space makes it very difficult for engineers to identify the key design variables that need to be optimized. One state-of-the-art approach is to apply a parametric optimization procedure to the thermomechanical setup of an electronics system. This includes an automatic workflow calling different software tools in batch, starting with the generation of a geometry based on the parameter values of a single design out of a possible sampling scheme. For each design the geometry is consumed by the following CFD and FEA analysis. Using this process, you can optimize your design by using scalar responses of the simulation, e.g. for minimal heat sink mass or thermal stress, while constraining the temperature within a given limit. Conducting a constrained Multi-Objective Optimization leads to hundreds or even thousands of design evaluations and thus to a high numerical effort. In order to keep the numerical effort reasonable, 0D (scalar) ROMs are used that approximate the responses within the design space of the design variables. With sufficient quality, they can be used for (pre-)optimization where thousands of designs and different optimization runs can be realized quickly. This allows us, in several steps, to convert the Multi-Objective Optimization task into a Single-Objective Optimization by summarizing positively correlated objectives and formulating preferences based on the results.One key element in this approach is the quality of the ROMs. Traditionally, regression models like Polynomial Methods, Moving Least Squares or Kriging are used to set them up. Nowadays ROMs can be generated using machine learning techniques like Deep Feed Forward Networks or even mixed approaches that combine Neural Network approximation with traditional models like Kriging.The described optimization methodology is applied to an optimization of a circuit board design. In total, 12 parameters are defined in the parametric model, including the number of fins and fin thicknesses of both heatsinks, fan flow rates and fan locations. The analysis starts with a Multi-Objective Optimization to reduce the thermal resistance of both heat sinks, the pressure loss, and the thermal stress and deformation on the board, while at the same time constraining the heat sink mass and the temperatures of the heat sources within given limits. Positively correlated objectives will be summarized into one single objective and another objective is converted to a constraint. This approach will be conducted using traditional ROMs and machine learning ROMs. Finally, we will discuss these ROMs and approaches for an optimal way to find the best designs.
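As an illustration of the 0D ROM idea, the following sketch builds a Kriging (Gaussian-process) surrogate with scikit-learn on a sampled design of experiments and then screens thousands of candidate designs; the response data are synthetic placeholders rather than results from the CFD/FEA workflow described above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel

rng = np.random.default_rng(1)

# DOE samples of the 12 design variables (normalized to [0, 1]); y would be a
# scalar response from the coupled CFD/FEA runs, e.g. a heat-sink thermal resistance.
X = rng.random((200, 12))
y = 1.0 + X[:, 0] ** 2 - 0.5 * X[:, 1] + 0.1 * np.sin(6 * X[:, 2])   # illustrative stand-in

kernel = ConstantKernel(1.0) * Matern(length_scale=np.ones(12), nu=2.5)
rom = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# The fitted ROM is cheap to evaluate, so thousands of candidate designs can be
# screened during (pre-)optimization before any further high-fidelity runs.
candidates = rng.random((5000, 12))
mean, std = rom.predict(candidates, return_std=True)
best = candidates[np.argmin(mean)]
```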
Presented By Thomas Reiher (Hexagon Manufacturing Intelligence GmbH)
Authored By Thomas Reiher (Hexagon Manufacturing Intelligence GmbH)Hemant Patel (Hexagon) Xiaoming Yu (Hexagon)
AbstractIn this presentation, we introduce a novel approach in MSC Nastran that integrates generative design principles directly within the topology optimization algorithm, streamlining the optimization process by producing designs that are much closer to the final production requirements and eliminating user-based decisions on where to add or remove material – decisions that require extensive expertise and knowledge of the manufacturing processes. A new, voxel-based engineering approach has been developed and integrated via a submodelling approach. A defined design space inside a complex assembly is identified via one unique property ID and a submodel is automatically set up, including boundary conditions and optimization parameters. The underlying optimization algorithm itself creates a new voxel mesh which is refined depending on optimization progress up to extremely high resolutions. Using an engineering optimization approach with a hard-kill BESO algorithm instead of the well-known mathematical density-based approach enables a clear geometry definition in each iteration, including a high-quality tet-mesh recreation after the submodel optimization has finished. This near-net-shaped design candidate is transferred back to the original assembly and automatically validated in the full assembly context by MSC Nastran. Due to the engineering approach of the algorithm, design constraints are covered much more clearly, e.g. with an exact representation of tapering angles of 3° or more for casting. With this approach, the optimization result is not just a blurry idea of the main load paths, but an actual design candidate which can be used downstream in the development process without the need for a manual CAD recreation. This new method reduces computational time (CPU & GPU accelerated) and engineering cycles significantly, enabling users to solve larger and more complex models than was previously possible. This opens up the path to design iterations and rapid prototyping of both components and full assemblies for both linear and non-linear simulation analyses. We will demonstrate all developments using representative industry-relevant examples.Take Aways:
• A novel approach in MSC Nastran that integrates generative design principles directly within the topology optimization algorithm, streamlining the optimization process by producing designs that are much closer to the final production requirements
• How to accelerate design iterations and prototyping in topology optimization applications, leading to significant productivity improvements
• Application of the novel generative design technology to solve larger and more complex industrial models for both linear and nonlinear simulations
• Shifting the use of topology optimization from a design idea generation tool to an actual geometry creation tool acting as a copilot for optimal part design generation.
Presented By Alessia Perilli (FVmat)
Authored By Alessia Perilli (FVmat)Doron Klepach (FVMat)
AbstractMeta-Materials (MMs) offer huge potential for providing solutions to advanced engineering problems while also opening a new playground for cutting-edge high-performance parts and systems. However, developing MMs and integrating them into everyday R&D processes is a big challenge. It requires know-how and expertise in MMs and computational physics. The design and simulation of systems consisting of millions of unit cells (the basic blocks of MM microstructures) make simulation times almost non-feasible. Optimization of such systems becomes impossible.We develop a methodology that addresses the challenges above and provides an R&D approach for MM design. We also present novel MMs with properties that do not exist in nature.We develop two MM categories. “Conventional” MMs are materials with extraordinary properties. They are made from common engineering materials, and their properties are engineered by the design of their micro-structures; they are mostly used today in optics, electromagnetics, and acoustics. The second category, Dynamic & Multi-Functional MMs, involves creating cavities inside unit cells, placing liquids, powders, and particles inside them, and allowing them to move inside these cavities. This powerful concept aims at multi-disciplinary usage, and allows for design and optimization of several properties and characteristics of the material. For example, we developed materials that become stiffer at higher temperatures and materials that change their mechanical properties when exposed to an external magnetic field. These MMs are adaptable, responsive to external conditions, and dynamically controllable.MM design is based on combining computational multiphysics modeling and Machine Learning. Building the ML-based predictor starts with developing a parametric, high-fidelity computational model, including mesh refinement, sanity checks, and step-by-step validation. This ensures a reliable foundation for the next stages. Physics constraints and manufacturing requirements, such as tolerances and resolutions, ensure the models are robust and practical. Standard ML validation techniques verify accuracy and robustness with test, train, and validation sets, resulting in reliable predictions for FEM modeling.Modern systems face major challenges related to power consumption and heat generation. Common engineering materials often fail to provide adequate solutions, leading to a growing demand for improved heat exchangers, heat sinks, and short-circuit prevention. We develop MMs to address these challenges. Examples include controlling the coefficient of thermal expansion versus mechanical stiffness, phase-changing elements with controlled conductivity versus temperature, and multi-physics (heat and flow) with triply periodic minimal surface geometry, enabling control of pressure and temperature drops.In mechanics, our R&D explores energy absorption through fluid viscosity, center-of-mass dynamics, and the integration of pendulum principles into microstructural unit cells. Fluid viscosity: we place viscous fluid within the unit cells. Upon impact, the fluid flows and resists motion due to its viscosity, enabling controlled energy dissipation. This mechanism ensures highly effective shock absorption and vibration damping while maintaining structural integrity.Recognizing the immense potential of MMs and the challenges in making them accessible to engineers, we develop a methodology that combines computational physics simulations with ML models.
Using this methodology we develop novel dynamic and multi-functional MMs for heat transfer and shock absorption and vibration damping.
BiographyAlessia Perilli holds a Bachelor’s and Master’s degree in Physics from L’Aquila University (Italy) and a PhD in Physics from Sapienza University of Rome, specializing in computational statistical mechanics. She then pursued postdoctoral research at Tel Aviv University, studying the physics of living systems, particularly in plants. Currently, she is an R&D scientist at FVMat in Haifa, where she develops metamaterials with advanced heat transfer properties and leverages machine learning to enhance FEM simulations.
Presented By Mahy Soler (Instituto de Astrofísica de Canarias)
Authored By Mahy Soler (Instituto de Astrofísica de Canarias)Konstantinos Vogiatzis (Instituto de Astrofísica de Canarias) Juan Cezar-Castellano (Instituto de Astrofísica de Canarias) Sergio Bonaque-Gonzalez (Departamento de Física, Universidad de La Laguna (ULL)) Marta Belio-Asen (Instituto de Astrofísica de Canarias) Miguel Nunez (Instituto de Astrofísica de Canarias) Mary Barreto (Instituto de Astrofísica de Canarias)
AbstractThe European Solar Telescope (EST) is a next-generation 4-m class solar telescope that will be built at the Observatorio del Roque de los Muchachos (ORM). The performance of an optical telescope is evaluated by its seeing, which refers to the image degradation caused by turbulent fluctuations in the air's refractive index as light travels through the optical path. This phenomenon arises from various sources, including atmospheric turbulence and environmental and local effects.Atmospheric turbulence is largely determined by the site location, and the ORM is renowned for its excellent atmospheric conditions for astronomical observation. The EST design aims to minimize local environmental turbulence effects caused primarily by the thermal ground layer. This is achieved by placing the optical elements as high as possible above the ground and using an open-air configuration that promotes natural ventilation. The local fluctuations in the air refractive index in the surroundings of the telescope are produced by a combination of thermal and mechanical turbulence that depends on the size and shape of the design. To evaluate the local effects, detailed Finite Element thermal and Computational Fluid Dynamics (CFD) models were developed. These models accounted for the topography, telescope structure, pier, enclosure and nearby telescopes within the observatory. A transient thermal analysis calculates surface temperatures which are subsequently used by the CFD model to compute the air temperature distribution and its refractive index. A series of transient CFD analyses is conducted to analyze the impact of environmental conditions, including wind speed, wind direction and telescope orientations, on different design alternatives. These simulations provide further insights into the spatial distributions of air temperature and refractive index fluctuations inside the optical path. The results are postprocessed to derive aero-optical metrics, allowing the telescope performance to be estimated. The study highlights how design choices influence aero-optical turbulence and provides feedback for optimizing the EST’s design. The results contribute to the telescope’s error budget by quantifying local turbulence effects and ensuring that the aerodynamic design supports its optical performance goals.
Authored & Presented By Jan Hansen (Technische Universität Graz)
AbstractElectromagnetic theory has become indispensable for electromagnetic compatibility (EMC) optimization of electronic systems. For example, the road approval of electric vehicles is subject to strict electromagnetic emission and immunity limits. Computer simulation helps to understand the system complexity and the subtlety of the electromagnetic coupling paths, and to optimize electronic systems with respect to weight and cost. Using deterministic methods such as network or electromagnetic simulations, we can study realistic models of electronic systems, e.g. traction inverters of electric vehicles, with simulations that have a wall-clock time of anything between minutes and several hours per run. This limits the number of samples that we can conveniently study within the industrial design cycle to a range from dozens up to several thousand at most. We show that it is possible to train machine learning models from realistic models of electronic systems which run with a wall-clock time on the order of milliseconds, independent of the underlying physical model being a circuit or an electromagnetic simulation. The gain in computational speed is at least a factor of 1000, enabling us to run millions of simulations a day. This raises the potential of simulation to a qualitatively new level, the implications of which are not yet fully understood. Using realistic models of electronic systems which are trained over up to 17-dimensional parameter spaces, we show how to solve multi-objective optimization problems of complex EMC scenarios. We discuss the obtained optimal solutions and the implications of these results on the EMC design of electronic systems. We also give an outline of the further potential that trained machine learning models in EMC may offer. In particular, we talk about the benefit of uncertainty quantification and its implications for risk analysis in the design for electromagnetic compatibility. Further, solving the inverse problem with Bayesian inference appears to improve the notoriously poor prediction quality of EMC models. With many parameters not exactly known and hard to acquire, Bayesian inference allows us to rigorously estimate the most likely set of model parameters to match the available measurements. We end the talk with a number of open challenges that wait to be solved.
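The Bayesian inference step mentioned above can be sketched with a simple random-walk Metropolis sampler running on a fast surrogate; the surrogate model, the measurement value, and the parameterization below are purely illustrative assumptions, not the authors' models or tooling.

```python
import numpy as np

rng = np.random.default_rng(3)

def surrogate_emission(params):
    """Stand-in for a trained ML surrogate (milliseconds per call in practice):
    maps two model parameters to a scalar emission level in dBuV (illustrative)."""
    a, b = params
    return 60.0 + 10.0 * np.log10(1.0 + a / b)

measured = 72.0          # measured emission level [dBuV]
sigma_meas = 1.5         # assumed measurement uncertainty [dB]

def log_posterior(params):
    if np.any(params <= 0):                      # flat prior on positive parameters
        return -np.inf
    resid = surrogate_emission(params) - measured
    return -0.5 * (resid / sigma_meas) ** 2

# Random-walk Metropolis sampling of the posterior over the model parameters
chain, current = [], np.array([1.0, 1.0])
lp_current = log_posterior(current)
for _ in range(20000):
    proposal = current + rng.normal(0.0, 0.05, size=2)
    lp_prop = log_posterior(proposal)
    if np.log(rng.random()) < lp_prop - lp_current:
        current, lp_current = proposal, lp_prop
    chain.append(current.copy())

posterior = np.array(chain[5000:])               # discard burn-in
print("posterior mean of the parameters:", posterior.mean(axis=0))
```

Because the surrogate is so cheap, the tens of thousands of posterior evaluations that such sampling requires become affordable, which is exactly what makes the inverse problem tractable.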
BiographyJan Hansen received the B.Sc. degree in mathematics/physics from Trent University, Peterborough, ON, Canada, in 1995, the Diploma in physics from Freiburg University, Freiburg im Breisgau, Germany, in 1998, and the Ph.D. degree in wireless communications from ETH Zurich, Zurich, Switzerland, in 2003. In 2005, he joined Robert Bosch GmbH, Stuttgart, Germany, to work in electromagnetic compatibility (EMC) simulation. Since 2022, he has been an Assistant Professor with the Institute of Electronics at Graz University of Technology and a staff scientist at Silicon Austria Labs. His primary research interests include the development of EMC simulation methods, electromagnetic modeling, and the application of machine-learning techniques. Jan Hansen leads the Christian Doppler Laboratory for EMC-aware robust electronic systems.
Presented By Alexander Schowtjak (3M Deutschland GmbH)
Authored By Alexander Schowtjak (3M Deutschland GmbH)Markus von Hoegen (3M Deutschland GmbH) Tobias Waffenschmidt (3M Deutschland GmbH)
AbstractAdhesives are commonly used in various engineering applications, providing bonding solutions in industries such as automotive and aerospace. Accurate modeling of adhesives is essential for predicting the performance and reliability of bonded structures. Finite Element Analysis (FEA) has become a valuable tool for simulating the behavior of adhesive joints under different loading conditions. Depending on the aim of the analysis, different types of material models are more or less suitable for different applications. Additionally, access to testing capabilities, costs, and the effort of the calibration process can influence the choice of material model. Common modeling approaches include viscoelastic models, elastic-plastic models, and cohesive zone models, each with different capabilities and limitations.Viscoelastic models can be built based on Dynamic Mechanical Analysis (DMA) data, making them very efficient for the calibration process. These models are used to capture the time-dependent behavior of adhesives and generally include strain-rate and temperature dependency, as well as creep and stress relaxation to some extent. The accuracy of viscoelastic models is typically limited by the degree of deformation.Elastic-plastic models can predict the behavior of adhesives under finite strains, accounting for both elastic (reversible) and plastic (irreversible) deformation. While these models are generally cost-effective and efficient to generate, the experimental effort can increase significantly when incorporating temperature or strain-rate dependency. Although failure cannot be predicted explicitly, these models can provide an indication of failure based on local stresses and strains.Cohesive Zone Models (CZMs) are widely used to simulate the initiation and propagation of cracks in adhesive layers. CZMs represent the adhesive layer as a continuum with a predefined traction-separation law that characterizes the material's response to loading. This approach allows for detailed analysis of fracture processes, including the prediction of crack initiation and growth. Although CZMs offer excellent predictive capabilities, the calibration process requires a range of different experiments. Additionally, incorporating strain-rate and temperature dependency is only supported by a few commercial FEA software packages.In conclusion, the modeling of adhesives within the context of Finite Element Analysis involves several techniques, each suited to different aspects of adhesive behavior. Viscoelastic, elastic-plastic, and cohesive zone models provide comprehensive tools for simulating the performance of adhesive joints. By leveraging these techniques, engineers can enhance the design and reliability of bonded structures across various industries.
Authored & Presented By Mads Nymann (Grundfos)
AbstractThe evolution of regulatory frameworks and efforts to minimize production costs when manufacturing pressurized components frequently necessitate the replacement of traditional materials such as cast iron, brass, and lead- and PFAS-containing materials with composite alternatives. This shift often results in reduced material strength, prompting a required redesign of the pressurized components. Such redesigns typically incorporate reinforcements in the form of ribs or increased material thickness, guided by the design engineer’s intuition. This design process can involve numerous iterations of design and simulation to achieve the necessary structural strength, while not necessarily minimizing material use.This paper introduces a framework for integrating topology optimization into the design workflow, aimed at reducing the lead time associated with simulation and CAD design of pressurized components during redesign by democratizing topology optimization otherwise performed by a simulation specialist. The global pump manufacturer Grundfos has since 2015 been dedicated to a simulation-driven development strategy aiming to implement simulation into all parts of product development. To facilitate broader accessibility of topology optimization, Grundfos has developed an automated topology optimization tool tailored for pressurized parts. This vertical tool empowers design engineers to perform topology optimization independently, without requiring the expertise of a simulation specialist. It features a web-based user interface developed in Python, which drives automated simulation software on a controlled virtual machine for users with limited experience.The automated topology optimization process utilizes Catia V5 CAD files prepared by the design engineer with specified parameters and publications. Design exclusion regions encompass the hydraulic surfaces, material thickness, and connections. The permissible design space is contingent upon the selected production technique; for casting processes, design engineers can input allowable design parameters within the confines of the casting tool and designated pull-out directions, while for additive manufacturing, the design space, overhang angle, and print orientation are defined accordingly. The interface allows design engineers to provide inputs such as specific materials, operational pressure, retained mass, and simulation speed, with the latter impacting mesh size and related solver settings.The user submission of a CAD file to the topology optimization tool initiates a linear static structural analysis within Ansys Mechanical, which is the foundation for the subsequent topology optimization. Given the inherent time demands of the optimization phase, acceptance criteria for the static structural simulation are embedded in the tool to prevent the progression of setups with errors to the topology optimization stage. After a successful topology optimization, the simulation output is visualized and made available on the web-based user interface for direct inspection.This vertical tool for topology optimization has successfully decreased the simulation lead time of the entire workflow, from performing the topology optimization to the routine maintenance of the simulation tool itself, reducing simulation lead time by approximately 95 percent.
For design engineers, this democratization of topology optimization provides immediate insights into optimal designs that reduce material consumption while maximizing stiffness, thereby significantly reducing the number of iterations necessary for design completion. This approach represents a marked improvement over traditional iterative methods that lack initial design guidance. The increased accessibility and development of specialized tools for complex simulations exemplify Grundfos’ dedication to advancing simulation-driven development throughout all stages of product development.
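Purely as an illustration of how a web-based interface can queue optimization jobs for design engineers, a sketch using FastAPI is given below; the endpoint names and the run_topology_optimization placeholder are hypothetical and do not represent the Grundfos tool or its interface.

```python
# Illustrative sketch of a web front end that queues topology-optimization jobs;
# endpoint names and the solver call are hypothetical placeholders.
import uuid
from fastapi import FastAPI, UploadFile, BackgroundTasks

app = FastAPI()
jobs: dict[str, str] = {}          # job id -> status

def run_topology_optimization(job_id: str, cad_path: str, retained_mass: float) -> None:
    """Placeholder: a real tool would drive the FE solver in batch on a controlled
    virtual machine, run the static acceptance checks and the optimization, and
    store the result for visualization in the web interface."""
    jobs[job_id] = "finished"

@app.post("/optimize")
async def submit(cad_file: UploadFile, retained_mass: float, tasks: BackgroundTasks):
    job_id = str(uuid.uuid4())
    path = f"/tmp/{job_id}_{cad_file.filename}"
    with open(path, "wb") as f:
        f.write(await cad_file.read())
    jobs[job_id] = "queued"
    tasks.add_task(run_topology_optimization, job_id, path, retained_mass)
    return {"job_id": job_id}

@app.get("/status/{job_id}")
def status(job_id: str):
    return {"status": jobs.get(job_id, "unknown")}
```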
Presented By Claus Pedersen (Dassault Systemes Deutschland)
Authored By Claus Pedersen (Dassault Systemes Deutschland)Wouter Van Der Velden (Dassault Systemes)
AbstractRenewable energy, particularly wind power, has become one of the fastest-growing and most affordable energy sources globally. To meet climate goals, expanding wind energy infrastructure requires extensive land use, which has driven more stringent regulatory standards to manage environmental and noise impacts. As wind turbines proliferate, concerns over their acoustic footprint, especially in densely populated and environmentally sensitive areas, have intensified; both aero-acoustic and vibro-acoustic modeling therefore become essential to predict and mitigate wind turbine noise.This study introduces a multi-fidelity framework for aero-acoustic simulation of wind turbines. The framework combines semi-analytical aerodynamic/aero-acoustic calculations with detailed Computational Fluid Dynamics (CFD) simulations, leveraging both two-dimensional blade profile analysis and full three-dimensional turbine models. The framework is initially validated using full-scale field test data, which serves as a baseline for testing methodology accuracy and reliability. The focus then shifts to assessing noise mitigation potential through the analysis and optimization of serrated trailing edges on wind turbine blades. By analyzing blade sections in both quasi-two-dimensional wind tunnel and full three-dimensional conditions, it is found that serrated trailing edges effectively reduce airfoil trailing-edge noise. As aero-acoustic levels decrease significantly through advanced simulation methods, previously masked tonal noise issues emerge as prominent challenges. These distinct, narrow-band sounds are particularly intrusive and can disrupt nearby communities, prompting stricter regulations to protect public health and environmental quality. A common source of tonal noise is transmission error from gear meshing within the drivetrain, generating vibrations that travel along load-dependent paths to radiating surfaces—typically the tower—before propagating through the air to microphones at ground level. This study introduces a high-fidelity, multi-disciplinary simulation workflow to enhance understanding and mitigation of these vibro-acoustic challenges.Given the complexity of wind turbine noise management, the collaboration between original equipment manufacturers (OEMs) and suppliers is crucial. This multi-disciplinary approach to system modeling, with new functionalities enabling the integration of IP-protected subsystems, facilitates a holistic approach to noise reduction while maintaining proprietary technology protections.In conclusion, the proposed multi-fidelity framework offers a robust tool for predicting and mitigating wind turbine noise. By combining aerodynamic, aero-acoustic, multibody and vibro-acoustic simulations, this approach addresses both broadband and tonal noise issues, providing a pathway toward quieter, more environmentally compatible wind turbine designs.
Authored & Presented By Jesus Garcia (Transvalor)
AbstractIn electromagnetic computational modeling, accurately predicting complex physical phenomena is crucial for understanding their industrial applications. In solid-state manufacturing, techniques like Induction Heating (IH) are widely used, for instance, for preheating in forging processes and enhancing mechanical properties through heat treatment [1]. Similarly, Magnetic Pulse Forming (MPF) enables the precise fabrication of complex parts and facilitates welding dissimilar materials without mechanical contact [2]. For processes involving liquid states or liquid-to-solid transitions, Electromagnetic Stirring (EMS) plays an important role, especially in continuous metal casting applications [3]. The Finite Element Method (FEM) has become a powerful tool for simulating such industrial problems, as it accommodates complex geometries and the coupling of diverse physical phenomena. However, achieving high accuracy in FEM simulations often requires fine domain discretization, which can lead to substantial computational costs.To address this challenge, adaptive remeshing techniques have been developed to optimize mesh distribution while reducing computational effort. This study proposes a methodology for anisotropic adaptive mesh refinement tailored to electromagnetic problems, leveraging the physics underlying the phenomena. The process begins with the definition of an a posteriori error estimator, which identifies regions in the computational domain that require enhanced resolution [4]. Then, a metric tensor is calculated to capture the anisotropy inherent in the electromagnetic phenomena [5]. Using this information, an automatic remeshing procedure dynamically adjusts the mesh size and shape, refining areas where the physical effects are most significant.This methodology ensures that computational resources are concentrated on critical regions, leading to more efficient and accurate simulations. This approach significantly reduces computational costs and CPU time, thereby enabling enhanced simulation of the industrial applications.Examples demonstrating the performance of the method will be presented, including an industrial case featuring full immersion of the inductors.References[1] V. Rudnev, D. Loveless, and R. L. Cook, Handbook of Induction Heating. CRC Press, 2017. doi: 10.1201/9781315117485.[2] J. R. Alves Zapata, “Magnetic pulse forming processes: Computational modelling and experimental validation,” Université de recherche Paris Sciences et Lettres, 2016.[3] U. Müller and L. Bühler, “Magnetofluiddynamics in Channels and Containers,” in Magnetofluiddynamics in Channels and Containers, Springer Berlin Heidelberg, 2001, pp. 1–7. doi: 10.1007/978-3-662-04405-6_1.[4] J. O. Garcia C., J. R. Alves Z., J. Barlier, and F. Bay, “A-Posteriori Error Estimator for Finite Element Simulation of Electromagnetic Material Processing,” IEEE Trans Magn, 2022, doi: 10.1109/TMAG.2022.3212597.[5] J. O. Garcia C., J. R. Alves Z., U. Ripert, J. Barlier, and F. Bay, “Anisotropic Mesh Adaptation Based on Error Estimation for 3D Finite Element Simulation of Electromagnetic Material Processing,” IEEE Trans Magn, 2023.
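The metric-tensor construction can be sketched, under standard assumptions, as an eigenvalue rescaling of the recovered Hessian of the field driving the adaptation, with the resulting directional mesh sizes bounded by prescribed limits; this is the classical interpolation-error-based construction and not necessarily the exact estimator of references [4]–[5].

```python
import numpy as np

def metric_from_hessian(H, err_target, h_min=1e-4, h_max=1e-1):
    """Build an anisotropic metric tensor from the recovered Hessian H (3x3) of the
    field driving the adaptation, for a prescribed interpolation error. The metric
    eigenvalues encode squared inverse mesh sizes along the principal directions
    and are clipped so that the sizes stay within [h_min, h_max]."""
    eigval, eigvec = np.linalg.eigh(H)
    lam = np.abs(eigval) / err_target                    # desired 1/h_i^2 per direction
    lam = np.clip(lam, 1.0 / h_max**2, 1.0 / h_min**2)
    return eigvec @ np.diag(lam) @ eigvec.T

# Example: a field varying much more strongly in one direction (e.g. across a skin depth)
H = np.diag([4.0e4, 1.0e2, 1.0e2])
M = metric_from_hessian(H, err_target=0.01)
sizes = 1.0 / np.sqrt(np.linalg.eigvalsh(M))             # resulting directional mesh sizes
print(sizes)
```

A mesher consuming such node-wise metrics then generates elements that are stretched along the directions of weak variation and refined across the directions of strong variation.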
Presented By Gabriel Curtosi (SEAT)
Authored By Gabriel Curtosi (SEAT)Fabiola Cavaliere (SEAT S.A.)
AbstractCrash simulations are inherently complex and involve significant uncertainty due to the dynamic and nonlinear behavior of materials during impact events. These uncertainties arise from factors such as contact mechanics, friction, variability in impact conditions, and sensitivity to initial conditions. Even small variations introduced during manufacturing can trigger unexpected structural responses, often leading to costly and time-consuming modifications shortly before the start of production (SOP). Addressing these challenges efficiently is critical to ensuring the safety, reliability, and robustness of vehicle designs.Traditionally, quantifying such uncertainties requires a large number of standard simulations to generate sufficient data for accurate analysis. However, this approach is both computationally expensive and time-intensive, posing significant challenges for meeting tight automotive development schedules and cost constraints.To address these limitations, SEAT has developed AQUA (Artificial Quantification for Uncertainty Anomalies), an innovative tool that uses advanced machine learning techniques to revolutionize uncertainty quantification in crash simulations. AQUA leverages data from VPS/Pamcrash simulations to train machine learning models that can accurately predict a broad range of structural behaviors. Unlike conventional methods, AQUA achieves this using a limited number of virtual tests, significantly reducing the computational resources and time required for analysis.This advanced methodology allows engineers to evaluate uncertainties under a variety of conditions, assess the robustness of designs, and proactively identify potential structural weaknesses before production begins. By enabling comprehensive and reliable analysis without the need for physical prototypes, AQUA supports a zero-prototypes goal, eliminating costly trial-and-error iterations. This not only ensures efficiency but also shortens development timelines, reduces expenses, and supports a more sustainable approach to vehicle design.AQUA represents a transformative breakthrough in crash simulation techniques. By combining the predictive power of machine learning with traditional simulation processes, it enhances accuracy, efficiency, and reliability. This groundbreaking tool empowers engineers to develop safer, more robust, and cost-effective automotive products, setting a new standard for innovation in the industry.
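The general pattern of training a surrogate on a limited number of crash runs and then propagating manufacturing scatter through it can be sketched as follows; the regressor choice and the synthetic data are illustrative assumptions and do not represent the AQUA implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)

# A limited set of crash-simulation results: inputs are scattered parameters
# (thicknesses, yield stresses, impact offset, ...), the output a scalar criterion
# such as an intrusion value. Values here are synthetic placeholders.
X_train = rng.normal(0.0, 1.0, size=(60, 8))
y_train = 100.0 + 8.0 * X_train[:, 0] - 5.0 * X_train[:, 1] + rng.normal(0.0, 1.0, 60)

model = GradientBoostingRegressor().fit(X_train, y_train)

# Propagate manufacturing scatter through the cheap surrogate instead of the solver
X_mc = rng.normal(0.0, 1.0, size=(100_000, 8))
intrusion = model.predict(X_mc)
print("P95 intrusion:", np.percentile(intrusion, 95))
print("probability of exceeding 120 mm:", np.mean(intrusion > 120.0))
```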
Authored & Presented By Mahesh Gupta (Kennesaw State University)
AbstractThe main objective in the design of a polymer extrusion die is to develop a die channel geometry which gives a uniform velocity distribution at the die exit. The uniform velocity at the die exit is required to minimize the extrudate distortion after the polymer exits the die. In the past, most extrusion dies were designed by a trial-and-error approach using the experience of the die designer. With the development of computationally efficient three-dimensional finite element flow simulation software for extrusion die design, designers can now virtually fine-tune their extrusion die before the die is machined. This can reduce the development time for extrusion dies by 40–50%. However, virtual fine-tuning of extrusion dies still requires the die designers to modify the die geometry themselves after each flow simulation using their experience. To eliminate this need for the die designer to modify the geometry after each flow simulation, in the present work a die optimization software, optiXtrue, is used to automatically improve the geometry of a profile die after each flow simulation. The software then simulates the flow again in the improved die geometry. This cycle of geometry improvement followed by a flow simulation is repeated until a geometry with a uniform exit velocity distribution is obtained in this automatic die optimization process. Besides eliminating the need for designer intervention for die improvement after each flow simulation, this extrusion die optimization software further reduces the development time for extrusion dies, and also provides a better die geometry than the geometries obtained by trial-and-error or by virtual fine-tuning.For a complex extruded profile shape, the initial die geometry developed by an experienced die designer, as well as the die geometry optimized by the optimization software, were machined to validate the predictions from the software. The initial die geometry had a large variation in velocity at the die exit, whereas the die geometry optimized by the optimization software resulted in a uniform exit velocity distribution. The predicted exit velocity distribution in the initial as well as the optimized die geometry matched accurately with the velocity distribution at the die exit in the experiments.
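The automatic improvement cycle (simulate, adjust the die geometry, repeat until the exit velocity is uniform) can be sketched as below; both functions are simplified stand-ins for the 3D finite element flow simulation and the geometry update performed inside the optimization software, not its actual algorithm.

```python
import numpy as np

def run_flow_simulation(channel_heights):
    """Stand-in for the 3D finite element flow simulation: returns the exit velocity
    in each segment of the profile (illustrative model in which velocity simply
    scales with the local channel height)."""
    return channel_heights / channel_heights.mean()

def adjust_channel_heights(heights, velocities, relaxation=0.5):
    """Open the channel where the exit is too slow and restrict it where it is too
    fast, mimicking the automatic geometry improvement step."""
    return heights * (1.0 + relaxation * (1.0 - velocities))

heights = np.array([1.0, 1.6, 0.7, 1.2, 0.9])       # initial die-land heights per segment
for iteration in range(20):
    v = run_flow_simulation(heights)
    spread = v.max() - v.min()
    if spread < 0.01:                                # exit velocity nearly uniform
        break
    heights = adjust_channel_heights(heights, v)
print(f"converged after {iteration + 1} iterations, velocity spread = {spread:.4f}")
```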
Presented By Maciej Majerczak (Valeo)
Authored By Maciej Majerczak (Valeo)Ewelina Czerlunczakiewicz (Valeo)
AbstractThe modern development of automotive products demands increasingly faster product development and design validation, reduced expensive physical testing, and a greater reliance on digital prototypes. Finite Element Analysis (FEA) has become an essential and powerful tool for predicting product behavior and validating designs.Automotive systems are subject to various types of mechanical loading, including random excitations that represent vibrations experienced throughout a product's lifetime. Linear FEA methods in the frequency domain are widely used in the industry to predict this behavior due to their efficiency. However, for certain complex Valeo products, a nonlinear time-domain approach offers higher accuracy by providing a more precise representation of product behavior, enabling improved predictions of failure modes, stress levels, and fatigue life. The main challenge in implementing nonlinear dynamic methods lies in the time-consuming preprocessing and post-processing, and the significantly longer computation times. To address this issue, Valeo has developed an automated process that converts models from a standard linear form to a nonlinear one, making nonlinear dynamic FEA feasible for large industrial FE models. The conversion process involves creating nonlinear contacts with the correct properties, changing the solver, implementing result post-processing, and automating the detection of common errors. Additionally, the objective is to maintain the current standard simulation workflow as closely as possible, enabling engineers to respond quickly to new design iterations and maintain a competitive edge through innovative solutions.The automated process for nonlinear dynamic FEA is demonstrated in a real case study involving a car Front Cooling Module (FCM) subjected to random vibration loadings. The FCM is a multi-component assembly comprising various types of heat exchangers. Historically, it was primarily responsible for cooling internal combustion engines in vehicles. However, with the electric vehicle revolution, the FCM has undergone a significant transformation. These changes have increased the need for detailed consideration of contact nonlinearities within acceptable simulation times.
Presented By Danijel Obadic (Siemens Mobility Austria)
Authored By Danijel Obadic (Siemens Mobility Austria)Bernhard Girstmair (Siemens Mobility Austria GmbH)
AbstractRailway vehicles are fascinating machines, running for immense timespans and distances while delivering continuous performance, with lifespans of up to 40 years and typically running around 1,000 km daily. Keeping this performance over such lifespans makes them engineering masterpieces.The most stressed and safety-critical parts of railway vehicles are the bogies, constituting the physical connection between carbody and rail. All forces between the vehicle and the ground are transmitted via the bogie.This must be accounted for during the engineering phase of the bogie to guarantee the desired lifespan, putting stringent requirements on the engineering processes regarding reliability and stability.This leads to a strong emphasis on virtual product development methods in railway bogie design. Almost all the design and optimization is done using simulation models. The first bogie that is built is the first product from serial production; there is no intermediate prototype which is optimized physically.The validation of the system requirements, including validation of homologation-related requirements, is performed using one of the bogies from serial production. This validation – simply put, physical measurements showing requirement conformity – constitutes the top level in the V-model-shaped design process. The validation of the topmost layer of the assessment quantities of vehicle dynamics in the V-model is always performed to check system requirement conformity. Nevertheless, assessment quantities with a high impact on the overall design process and engineering iterations, such as spring travel, play a minor role in the validation procedure.Using operational measurements, an MBS (multi-body system) model of a Vectron locomotive was validated with respect to the primary spring travel occurring during operation. Primary spring travel is derived from top-level system requirements during the design process. It has great influence on the derivation of subsequent requirements, like requirements for primary springs. The validation was performed on several levels, starting from simple, quasistatic spring travel resulting from constant traction forces, ranging to validation of dynamic amplitudes, up to a check of the frequency content of the spring travel signal.The model's predictions of the primary spring travel were found to be very accurate. They could be further improved by adapting some detailed component parameters, which are of secondary importance at first glance. The work highlights the importance of comprehensive model validation of MBS models to improve the predictive aspects in design processes of future bogies.
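The multi-level comparison of measured and simulated spring travel (quasistatic value, dynamic amplitude, frequency content) could be scripted roughly as follows; the two signals here are synthetic placeholders for the operational measurement and the MBS output, and the sampling rate is an assumption.

```python
import numpy as np
from scipy.signal import welch

fs = 200.0                                   # assumed sampling rate [Hz]
t = np.arange(0.0, 600.0, 1.0 / fs)          # ten minutes of operation
rng = np.random.default_rng(5)

# Synthetic placeholders for measured and simulated primary spring travel [mm]
measured  = 2.0 + 0.8 * np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.normal(size=t.size)
simulated = 1.9 + 0.85 * np.sin(2 * np.pi * 1.2 * t) + 0.25 * rng.normal(size=t.size)

# Level 1: quasistatic offset (e.g. due to constant traction forces)
print("mean travel  meas/sim:", measured.mean(), simulated.mean())

# Level 2: dynamic amplitude
print("std of travel meas/sim:", measured.std(), simulated.std())

# Level 3: frequency content via Welch power spectral density estimates
f_m, p_m = welch(measured - measured.mean(), fs=fs, nperseg=4096)
f_s, p_s = welch(simulated - simulated.mean(), fs=fs, nperseg=4096)
print("dominant frequency meas/sim [Hz]:", f_m[np.argmax(p_m)], f_s[np.argmax(p_s)])
```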
Authored By Ceyhun Sahin (Noesis Solutions)Dr. Simone Ficini (Noesis Solutions) Maximilien Landrain (Noesis Solutions) Thibault Jacobs (Noesis Solutions)
AbstractCrash analyses are always amongst the top priorities for improving vehicle safety and performance in the automotive industry. The evolution in drive train technologies will not make these crash analyses obsolete any time soon, as the safety of the battery pack has introduced new challenges for manufacturers. Historically existing crash analyses, on the other hand, have the potential to be upcycled to gain knowledge before the design of battery packs in Electric Vehicle (EV) chassis. In this study, we integrate AI-based surrogate models and Multi-Fidelity Efficient Global Optimization (MF-EGO) algorithms within a cloud-native platform to streamline the crash-box design process and to achieve faster design cycles, reduced computational costs, and enhanced collaboration across organizational boundaries. In this work, we use a crash-box with parametrized material thicknesses in a full car simulation under different impact speeds. These high-fidelity transient simulations run in Open Radioss. A frontloaded database of these simulations also serves as the training dataset for a surrogate model. These surrogate models significantly reduce the computational load while preserving accuracy, enabling faster evaluations within optimization cycles.The parametric nature of the problem allows the usage of Optimus for the optimization of crash-box designs. Demanding optimization problems may end up requiring an excessive number of high-fidelity simulations for which the time and resources cannot always be justified. To address this trade-off between simulation accuracy and computational efficiency, the MF-EGO algorithm is used to orchestrate the usage of low-fidelity calculations, like coarser meshes or surrogate models, together with high-fidelity simulations in the optimization process. In this specific problem the surrogate models generated by nvision are used for the low-fidelity calculations and the high-fidelity simulations will always be run on Open Radioss. The MF-EGO algorithm leverages these different fidelities to optimize a design or system more efficiently than using only the highest fidelity level, which is typically the most computationally expensive.Central to this framework is a cloud-native collaborative engineering platform, id8, that serves as the backbone for integration, execution, and democratization of the workflow. This enables seamless orchestration of simulations and optimization tasks, leveraging cloud resources for scalability and cross-geographical collaboration. With its automated interface, users can define and execute complex engineering workflows through a single-click operation, thereby reducing process complexity.In conclusion, this study showcases how integrating MF-EGO algorithms with embedded AI-driven surrogate models in a cloud-native environment can enhance the crash-box design process in the automotive industry. This not only improves efficiency and accuracy but also promotes accessibility and collaboration, setting a benchmark for future applications in engineering design.
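At the core of EGO-type methods is an acquisition function such as expected improvement evaluated on a Gaussian-process surrogate; the single-fidelity sketch below illustrates the idea (the multi-fidelity bookkeeping of MF-EGO and the Open Radioss coupling are omitted, and the objective is a synthetic stand-in for the crash response).

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(11)

def crash_response(x):
    """Stand-in for the expensive crash simulation: a scalar objective
    (e.g. peak deceleration) as a function of normalized wall thicknesses."""
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.05 * np.sin(10 * x[..., 0])

X = rng.random((12, 3))                              # initial design of experiments
y = crash_response(X)

for _ in range(15):                                  # EGO iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.random((4000, 3))                     # candidate pool
    mu, sd = gp.predict(cand, return_std=True)
    sd = np.maximum(sd, 1e-12)
    z = (y.min() - mu) / sd
    ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = cand[np.argmax(ei)]                     # next expensive evaluation
    X = np.vstack([X, x_next])
    y = np.append(y, crash_response(x_next))

print("best design found:", X[np.argmin(y)], "objective:", y.min())
```

The multi-fidelity variant additionally decides, for each new sample, which fidelity level (surrogate, coarse mesh, or full simulation) gives the most information per unit cost.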
Presented By Vincent Suske (Dynamore)
Authored By Vincent Suske (Dynamore)Aleksander Bach (Ford-Werke GmbH) Jonas Rohrbach (RWTH Aachen) Andre Haufe (DYNAmore GmbH An Ansys Company)
AbstractThe integration of thin-walled structures with geometrically optimized lattice-type configurations has emerged as a promising approach for the development of crash management systems within the automotive industry. It is anticipated that these structures, when combined with advanced additive manufacturing techniques such as selective laser sintering (SLS), will exhibit superior crashworthiness, which may result in lighter and more efficient vehicle designs. This is particularly advantageous for low-volume vehicles, where the benefits of reduced weight and enhanced safety could facilitate broader adoption of this technology. This concept is currently being actively pursued in the publicly funded research and development project KI-LaSt*. The initial phase of this project involves the identification and development of suitable alloy candidates that can be produced using the SLS process while exhibiting the necessary ductile fracture behavior. This step is of great consequence, as it ensures that the materials used are capable of withstanding the demands of crash scenarios without compromising structural integrity. The present contribution outlines the various stages of alloy development, emphasizing rapid and straightforward testing methods that follow the inherently time-consuming production process.A substantial part of this research is dedicated to the comprehensive material characterization and identification of material parameters for the selected alloy, with an additional goal of calibrating an LS-DYNA material card. This requires a detailed analysis to comprehend the material's behavior under diverse conditions, thereby ensuring its compliance with the requisite performance standards. Moreover, the study includes structural simulations that serve to validate the calibrated material card. These simulations are indispensable for predicting the material's performance in actual crash scenarios, thus providing a robust foundation for further development and optimization. In general, the combination of thin-walled structures with optimized lattice configurations and the application of advanced manufacturing techniques, such as selective laser sintering, represents a promising advancement in automotive crash management systems. It is anticipated that the ongoing research and development efforts in the KI-LaSt project will facilitate the creation of innovative solutions that enhance vehicle safety and efficiency, particularly in low-volume production contexts.* Funds for KI-LaSt were provided by the Federal Ministry for Economic Affairs and Climate Action (BMWK) due to an enactment of the German Bundestag under Grant No. 19I21036D.
Presented By Fan Yang (European XFEL GmbH)
Authored By Fan Yang (European XFEL GmbH)Sebastian Goede (European XFEL) Daniele La Civita (European XFEL)
AbstractThe European X-ray Free Electron Laser (European XFEL) is a research facility providing a 4th-generation light source. It generates ultra-intense, ultrashort X-ray flashes of 150 femtoseconds with MHz repetition rate. This world’s largest X-ray laser is opening up completely new research opportunities for scientists and industrial users across disciplines, such as mapping atomic details of viruses, filming chemical reactions, and studying processes in the interior of planets. As the X-ray beam is transported through a 1.2 km-long tunnel from the undulators to the experimental hall, it interacts directly with numerous beamline components, which range from solid materials to liquid and gas media. It is a significant challenge for the engineering design of the beamline instrumentation, which is required to be able to sustain extremely high heat loads while transporting the highly coherent X-ray laser beam to the experimental hall. Consequently, numerical simulation modelling plays an important role in the system engineering process. In this contribution, a comprehensive multi-physics Finite Element simulation for a beam shutter is presented, including physical phenomena such as heat transfer, structural deformation, gas flow, and phase transitions. Since a beam shutter is a personal interlock component with the highest safety requirements in the radiation protection scheme, it is critical to determine the damage thresholds of such components. Various beam shutter materials, such as boron carbide, silicon, and CVD diamond have been studied in simulations and validated against experimental results. Furthermore, a Computational Fluid Dynamics (CFD) simulation is presented, focusing on characterization of nozzles for the liquid sheet jet sample delivery system. The nozzle geometry and fluid parameters have been studied parametrically to understand how these boundary conditions influence the flow performance of the liquid jet.High-fidelity simulations are often very time-consuming and computationally expensive. Recent advancements in numerical methods, such as Reduced Order Modelling (ROM), AI-empowered data-driven simulation algorithms, and simulation-based generative design enhanced by machine learning, have significantly reduced complexity of the simulation tasks. These advancements provide engineers with a deeper understanding of simulation data correlations, accelerate the iterative instrumentation design process, and therefore ensure safe, robust, and efficient facility operations at European XFEL.
Biography2001–2003: Master's studies in computational mechanics and materials science at the University of Stuttgart. 2003–2008: scientific assistant at the Institute of Mechanics, Helmut-Schmidt University, Hamburg. 2008–present: senior simulation engineer at European XFEL.
Authored & Presented By Shruthi G S (TE Connectivity India)
AbstractSignal Integrity (SI) analysis assesses the ability of a signal to propagate without distortion from driver to receiver between components on a PCB. Power Integrity (PI) analysis is the process of ensuring a stable, noise-free power delivery network to all components; it examines the voltage fluctuations, current distribution, and impedance of the power and ground planes in the PCB. Performing SI analysis helps to understand the performance of the electronic product in terms of transmitted signals, while performing PI analysis helps to mitigate voltage drop, manage current distribution, and ensure low-impedance paths between power and ground planes. The increasing complexity and speed of modern electronic systems, especially in the aerospace and defense sectors, demand a holistic approach to PCB design. Ensuring Signal Integrity (SI) and Power Integrity (PI) is essential for reliable high-speed signal transmission and a stable power delivery network in harsh environments. Even minor signal degradation or power disruption can lead to catastrophic consequences. System-level SI-PI co-simulation addresses the critical interdependencies between signal and power behavior in PCBs. Power fluctuations can degrade signal quality, while signal transitions can induce power noise. By integrating electromagnetic simulation for SI with PI modeling, engineers can predict and mitigate risks such as voltage drops, ground bounce, power-induced crosstalk, and noise coupling in real time. Meeting industry standards for SI and PI, especially in the aerospace, defense, and automotive sectors, is easier with co-simulation, as it provides a thorough validation of both signal and power paths. This paper demonstrates the use of Ansys software for unified SI-PI co-simulation, enabling accurate insights into real-world PCB performance. Practical strategies, such as PDN optimization, decoupling, and layout improvements, help resolve issues like EMI, signal distortion, and reflections. Co-simulation minimizes iterative design cycles, reduces costs, ensures compliance with industry standards, and optimizes PDN design for efficient power delivery. It ensures robust performance in multilayer PCBs and advanced technologies like DDR memory and RF systems, where tight SI-PI integration is critical for next-generation electronic systems.
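For orientation only (not drawn from the paper): a common first step in PDN work is a target-impedance estimate, Z_target = (rail voltage x allowed ripple) / transient current step. The short Python sketch below illustrates this rule of thumb with purely hypothetical numbers.

# Illustrative rule-of-thumb target-impedance estimate used in power-integrity
# work; the rail voltage, ripple budget, and current step below are hypothetical.

def pdn_target_impedance(v_rail: float, ripple_fraction: float, transient_current: float) -> float:
    """Return the target PDN impedance Z_t = (V_rail * allowed ripple) / transient current step."""
    return (v_rail * ripple_fraction) / transient_current

if __name__ == "__main__":
    # Hypothetical 1.2 V rail, 3 % allowed ripple, 10 A transient current step
    z_t = pdn_target_impedance(v_rail=1.2, ripple_fraction=0.03, transient_current=10.0)
    print(f"Target PDN impedance: {z_t * 1e3:.1f} mOhm")  # ~3.6 mOhm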
DescriptionThis session would be organized and executed by members of the NAFEMS Simulation Governance and Management Working Group. The main focus will be a presentation on the upcoming Guidelines for Validation of Engineering Simulation, which has just been approved for publication, delivered by one or more of the authors (Ola Widlund, Jean-François Imbert, and Alexander Karl are all working to attend). This publication defines an expanded means of validation, moving beyond the dedicated high-quality experiments commonly referenced in existing standards. A spectrum of validation methods is presented, categorizing them from the strict definition of Validation per ASME VVUQ standards through weaker validation approaches such as expert review. Attributes of validation rigor are defined to aid in assessing the credibility of simulations. The goal is to provide greater flexibility to industrial users of simulations, allowing them to select a level of validation rigor consistent with the applications and risks associated with their simulations, as not all users need the highest levels of rigor associated with the aerospace or medical device industries. The remainder of the session would consist of one or more of the following: • Papers on validation drawn from the abstracts submitted for the congress • Presentations from SGMWG members on the topic of validation • Possibly a question and answer session with the authors and SGMWG members. This will be determined once the SGMWG has had the opportunity to review submitted abstracts for presentations that would complement the theme, as well as once the authors and SGMWG members are able to confirm travel authorization with their respective employers. The SGMWG is aware of several tentative abstracts on strategies for obtaining certification by simulation. Depending on how many of these come to fruition, this could be another session topic, or, if only one or two abstracts are submitted, they could be used to fill out this session.
AbstractComputer-Aided Engineering (CAE) relies on physics-based computational models to perform analysis tasks of industrial products at reduced cost and time-to-market. The possibility to simulate the behaviour of different design variants, with limited resort to physical prototyping and testing, facilitates the achievement of quality and sustainability targets while increasing profit margins. However, the positive effects of driving product development and manufacturing via CAE depend essentially on the predictive capability of the computational models used in the simulations. Potential sources of uncertainty for the results of numerical simulations include all the intrinsic elements of the model building and analysis processes, such as modelling assumptions, variability of physical properties, measurement uncertainty, and numerical errors. Furthermore, human errors in the use of the models, as well as in the way the models are managed during the whole lifecycle, might make simulation outcomes deviate from reality. Simulation Governance is the process of ensuring that the predictive capability of numerical simulations is adequate for their intended use (essentially, a Quality Assurance function tailored to CAE). Activities such as model verification, validation, and uncertainty quantification fall within the scope of Simulation Governance. The systematic, large-scale implementation of Simulation Governance is often hindered by a lack of dedicated resources, frequently resulting from missing guidance on how to translate key concepts and methods from the scientific context where they originate to the context of industrial research and development. In this contribution, we will present reflections and outcomes from the recently ended research project TRUSTIT, which targeted the problem of integrating uncertainty quantification and sensitivity analysis into industrial CAE workflows, namely in complete vehicle simulations performed at Volvo Cars with the in-house developed tool VSIM. The key requirements to establish a complete simulation credibility assessment framework are identified, together with the implications for simulation-driven design and virtual testing. The practical challenges to integrate uncertainty quantification in existing simulation platforms will be illustrated with an example based on the virtual representation of the coastdown test, which is designed to determine the aerodynamic and mechanical resistive forces acting on vehicles.
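To make the uncertainty-propagation idea concrete (an illustration, not the TRUSTIT or VSIM implementation): the coastdown test is commonly reduced to a road-load model F(v) = f0 + f1*v + f2*v^2, and uncertainty in the three coefficients can be propagated to the resistive force by simple Monte Carlo sampling. The coefficient distributions below are hypothetical placeholders.

import numpy as np

# Minimal Monte Carlo sketch: propagate uncertainty in road-load coefficients
# through the standard coastdown model F(v) = f0 + f1*v + f2*v^2.
# The coefficient distributions are hypothetical, not measured values.

rng = np.random.default_rng(0)
n = 10_000
f0 = rng.normal(120.0, 5.0, n)      # rolling-resistance term [N]
f1 = rng.normal(0.5, 0.05, n)       # speed-proportional term [N/(m/s)]
f2 = rng.normal(0.35, 0.02, n)      # aerodynamic term [N/(m/s)^2]

v = 27.8                            # evaluation speed, ~100 km/h in m/s
force = f0 + f1 * v + f2 * v**2

print(f"mean resistive force: {force.mean():.1f} N")
print(f"95% interval: [{np.percentile(force, 2.5):.1f}, {np.percentile(force, 97.5):.1f}] N")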
Presented By Andreas Spille (CFX Berlin Software)
Authored By Andreas Spille (CFX Berlin Software), Jan Hesse (CFX Berlin Software GmbH), Dan Davey (Rotor Design Solutions Ltd)
AbstractPositive displacement machines may be used as compressors (e.g. scroll, roots, screw, vane) or pumps (e.g. lobe, internal and external gear, screw, eccentric screw, gerotor). A typical positive displacement machine has a minimum of two rotors enclosed within a solid casing. The geometry of the rotor sections is a specialized gear profile with lobes of the first rotor interlocking into the flutes of the second rotor. To displace a fluid, the rotors are rotated, which causes pockets of the fluid to travel around the outside of the rotors, so that the fluid is transferred from the inlet end to the discharge end of the machine. As the rotors in the machine are moving dynamically, often at high RPM, it is essential that these chambers are separated by small gaps (radial, axial, and interlobe gaps). Clearly, the shape and the size of the chambers and the gaps play a crucial role in the performance of the machine. Poor rotor geometry design with excessively large gaps causes an inefficient machine and unnecessary leakage, while gaps that are too small risk contact damage and ultimately machine failure. Therefore, efficient positive displacement rotor design is important. The design of the rotor tooth shapes is a complex process. This process is constantly evolving, with new design possibilities such as variable helix twist offering a step change in rotor efficiency. Rotor Design Solutions Ltd (RDS) offers a software package that designs the rotor profiles for multiple positive displacement machine types. The main design criterion for the rotor tooth profile is leak minimization rather than the power transmission associated with standard gear design. The RDS software allows a rotor tooth to be built up in small sections of geometrical functions such as radii, parabolas, hyperbolas, trochoids and ellipses. These sections connect together to produce a continuous smooth rotor geometry. This geometry may then be analysed to measure and compare the size of the leakage paths so an optimized shape may be developed. Finally, precise clearance gaps may be added to the rotor profile to avoid unwanted contact. However, manufacturing and experimentally testing these new rotor geometries is time-consuming and expensive, and the optimum clearances required for efficient and reliable operation are difficult to predict. Therefore, 3D CFD simulation is often used to calculate the performance of new rotor designs and examine the flow behaviour, the rotor dynamic deformation and the thermal deformation in detail. This allows the clearance pattern between the rotors to be varied and tested with respect to build-up of pressure, leakage paths, and overall machine efficiency. This presentation shows how the rotor profiles are designed in RDS software, exported to TwinMesh, which creates the meshes and simulation setup, and how the CFD simulation with Siemens Simcenter STAR-CCM+ is used for the design evaluation. The overall process is simplified by an automated data exchange between RDS software and TwinMesh, and by automated pre-processing and post-processing for the CFD solver delivered by TwinMesh. This semi-automated process confirms the best design by comparing the CFD simulations. The manufacture of these CFD-proven rotors may then be progressed to experimental validation with confidence, minimizing development cost and time.
Biography* Diploma and PhD in physics on stability analysis of plane Couette flow * PostDoc at TU Berlin on optimal control and generation of turbulent inflow data for LES * Since 2001 responsible for research & development at CFX Berlin Software GmbH * Research projects on simulation of gas metal arc welding, aeroacoustics, electroplating, electrochemical machining, pyrolysis in room fires, explosives vapour detection, and others * Focus is on implementation, verification, and validation of physical and numerical models in CFD * Since 2019 managing director of CFX Berlin
Presented By Wolfgang Witteveen (FH OÖ – Forschungs- und Entwicklungs)
Authored By Wolfgang Witteveen (FH OÖ – Forschungs- und Entwicklungs), Lukas Koller (FH OÖ – Forschungs- und Entwicklungs)
AbstractAn interference fit is a common joining technology used to connect a shaft and a hub. In the case of dynamic loads, characteristic variables such as contact pressure or safety against slip are load and state dependent. Such effects could previously either not be investigated at all using common simulation methods or only in a highly simplified manner. The reason for this is the non-linear contact situation between shaft and hub, which made a finite element simulation with fine meshing and the consideration of all dynamic effects impossible. In this work, so-called "contact modes" are applied to interference fits. Contact modes are trial vectors for describing the deformations within a contact area, leading to a dramatic reduction of the number of degrees of freedom, and thus the computing time, without any significant loss of accuracy. This method closes the previously mentioned gap on the simulation map because contact modes allow non-linear, accurate and fast numerical time integration of finely meshed finite element models of interference fits without simplifications regarding the dynamics. In the latest versions of ABAQUS (FEM) and SIMPACK (MBS), the automated computation of contact modes, their import into the multi-body simulation and the definition of the contact force elements have been implemented in a user-friendly way. The accuracy and efficiency of this new method are demonstrated by static and dynamic load cases. For the static load cases, the resulting stresses inside an interference fit, computed with the multi-body simulation software SIMPACK, are compared to those computed with the nonlinear finite element code ABAQUS. With fine meshes and dynamic loads, a direct comparison of the MBS result with an FEM computation is no longer possible. For this reason, convergence analyses are presented for dynamic loading. The stresses within the contact area converge with an increasing number of contact modes. The CPU times are very short for all computations.
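For readers unfamiliar with the underlying idea, the sketch below shows a generic Galerkin projection onto trial vectors, which is the mechanism by which contact modes shrink the contact degrees of freedom. It is an illustration only, not the ABAQUS/SIMPACK implementation described above; the system and trial vectors are random stand-ins.

import numpy as np

# Generic Galerkin-projection sketch of how trial vectors ("contact modes")
# reduce the degrees of freedom of a contact region.

def reduce_system(K: np.ndarray, f: np.ndarray, Phi: np.ndarray):
    """Project a full stiffness system K u = f onto trial vectors Phi (u ~ Phi q)."""
    K_red = Phi.T @ K @ Phi          # reduced stiffness
    f_red = Phi.T @ f                # reduced load
    q = np.linalg.solve(K_red, f_red)
    return Phi @ q                   # approximate full-space displacement

# Hypothetical small example: 1000 contact DOFs reduced to 20 trial vectors
n_dof, n_modes = 1000, 20
rng = np.random.default_rng(1)
A = rng.standard_normal((n_dof, n_dof))
K = A @ A.T + n_dof * np.eye(n_dof)                           # SPD stand-in for a stiffness matrix
f = rng.standard_normal(n_dof)
Phi, _ = np.linalg.qr(rng.standard_normal((n_dof, n_modes)))  # orthonormal trial vectors
u_approx = reduce_system(K, f, Phi)
print(u_approx.shape)  # (1000,), obtained from a 20-DOF solve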
Authored By Morgan Jenkins (Secondmind), Victor Picheny (Secondmind), Hrvoje Stokic (Secondmind Ltd), Qi Qi (Secondmind Ltd)
AbstractSet-based design methodologies have demonstrated a significant promise in offering systems engineers a broader range of choices earlier in the design process than conventional point-based design. This approach allows engineers to explore multiple alternatives simultaneously, making it particularly advantageous in dynamic and rapidly evolving fields. However, the dynamics in automotive design are evolving at an unprecedented pace. The myriad market and regulatory factors contribute to increased levels of complexity that threaten to overwhelm traditional time- and resource-intensive SBD processes. Challenges include integrating new technologies, meeting stringent emissions standards, and accommodating diverse consumer preferences, which collectively strain conventional design frameworks.In this talk, we will explore how advanced, data-efficient Machine Learning techniques can deliver a step change in system design optimization, pushing set-based design to new and essential levels of efficiency and impact. These ML techniques facilitate more comprehensive analysis through a highly data efficient approach, to inform decision-making, thus overcoming limitations of traditional methods. Through practical examples of real-world design challenges, we will demonstrate how ML helps engineers identify and rapidly evaluate better choices across vastly more complex design spaces.Additionally, we will discuss how these techniques can reduce dependencies on expensive simulations by predicting outcomes more accurately with fewer computational resources. This ability minimises schedule risks and cost overruns by adaptively learning and refining processes as new data becomes available. In particular, we will illustrate how set-based design frameworks can be used to decouple requirements across a multi-dimensional parameter space, enabling simultaneous consideration of multiple disciplines and teams. This aspect highlights its power in collaborative environments where interdisciplinary integration is crucial.Attendees will leave this talk with a deeper understanding of how augmenting set-based design with Machine Learning can significantly enhance its capabilities, particularly for complex products or systems involving multifaceted trade-offs. This approach not only augments human expertise and improves efficiency, but also uncovers novel, data-driven insights, making set-based design a more robust and powerful tool in the engineer’s toolbox. By embracing these innovations, organisations can drive forward more effective, sustainable, and innovative automotive design processes.
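As a concrete, hedged illustration of a data-efficient surrogate in a set-based setting (the talk does not specify the method; a Gaussian process is assumed here), the sketch below trains a surrogate on a handful of expensive evaluations and then screens a large candidate design set against a requirement threshold.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Sketch only: a data-efficient surrogate used to screen a design set against a
# requirement with few expensive evaluations. expensive_simulation is a
# hypothetical stand-in for a costly system simulation.

def expensive_simulation(x):
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(2)
X_train = rng.uniform(-1, 1, size=(15, 2))          # few expensive samples
y_train = expensive_simulation(X_train)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_train, y_train)

X_candidates = rng.uniform(-1, 1, size=(5000, 2))   # cheap-to-evaluate design set
mean, std = gp.predict(X_candidates, return_std=True)
feasible = mean + 2 * std < 0.8                      # keep designs confidently below the requirement
print(f"{feasible.mean():.1%} of the candidate set retained")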
Presented By Landong Martua (Engineering Cluster, Singapore Institute of Technology)
Authored By Landong Martua (Engineering Cluster, Singapore Institute of Technology), Kiew Choon Meng (Engineering Cluster, Singapore Institute of Technology, Singapore), Steven Tay Nguan Hwee (Engineering Cluster, Singapore Institute of Technology, Singapore), Gan Hiong Yap (Engineering Cluster, Singapore Institute of Technology, Singapore)
AbstractRecent advancements in triply periodic minimal surfaces (TPMS) have prompted a surge in their application as innovative solutions for efficient heat removal in compact systems. TPMS-based heat exchangers exhibit significant promise for cooling applications due to their high surface area, tunable porosity, and optimized fluid flow characteristics. However, there remains a critical gap in the literature concerning optimizing TPMS design variables to enhance thermal performance and manage pressure, particularly in the context of air-cooled heat exchangers.The present study investigates the thermal and fluid dynamic performance of an air-cooled heat exchanger employing a sheet Diamond TPMS structure. A conjugate heat transfer analysis was conducted using computational fluid dynamics (CFD) via ANSYS Fluent to evaluate the heat transfer and fluid flow characteristics across various configurations. A simplified model was developed in nTop, focusing on a designated design space of 30 mm x 30 mm x 30 mm to facilitate streamlined evaluation.The Design of Experiments (DoE) methodology was employed to explore key design variables systematically. Critical factors under investigation included TPMS unit cell size, wall thickness, and material selection (Aluminum and Copper). The impact of these variables on essential performance metrics—such as heat transfer rate, cold air flow rate, and hot liquid pressure drop—was analyzed to identify optimal configurations that maximize thermal efficiency while minimizing pressure losses.The findings reveal that increasing the unit cell sizes and decreasing wall thickness—thereby enhancing porosity—significantly improve thermal performance, including heat transfer rates and fluid flow characteristics, while effectively reducing pressure drop. Conversely, the choice of material (Aluminum versus Copper) exhibited a relatively minor influence on heat transfer outcomes, indicating that geometric configuration and porosity are more predominant factors in this application.This study establishes the feasibility of incorporating TPMS structures into heat exchanger designs and offers a systematic approach for identifying optimal configurations by integrating DoE and CFD analysis. By elucidating the most influential design parameters, this research advances effective and efficient heat exchangers, thereby facilitating improved cooling performance in compact devices within various industrial applications.
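For illustration (the factor levels are hypothetical placeholders, not the study's actual values), a full-factorial DoE plan over the three design variables named above can be generated in a few lines and then fed to the CFD workflow one row at a time.

from itertools import product
import csv

# Sketch of a full-factorial DoE plan over the design variables named in the
# abstract; the specific factor levels are hypothetical placeholders.
cell_sizes_mm = [6.0, 7.5, 10.0]
wall_thicknesses_mm = [0.4, 0.6, 0.8]
materials = ["Aluminum", "Copper"]

with open("tpms_doe_plan.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["run", "cell_size_mm", "wall_thickness_mm", "material"])
    for run_id, (cell, wall, mat) in enumerate(product(cell_sizes_mm, wall_thicknesses_mm, materials), start=1):
        writer.writerow([run_id, cell, wall, mat])  # each row becomes one CFD case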
Authored By Moncef Salmi (Hexagon), Srikanth Derebail Muralidhar (Hexagon MI)
AbstractObtaining high-quality material data for short-fiber reinforced plastics (SFRP) is a critical yet challenging requirement in material engineering. The mechanical performance of SFRP materials—commonly used in industries for their lightweight and strength properties—is highly influenced by factors such as fiber orientation, temperature, and loading conditions. Traditional experimental methods for capturing complex properties like static stress-strain behavior and creep response require significant time, budget, and specialized resources. The delays and high costs associated with gathering this data can be prohibitive, often exceeding the timelines of design projects and limiting the ability to conduct rapid virtual tests, essential for optimizing product performance while reducing environmental impact. The presented work demonstrates a new approach based on pretrained AI models, combined with a transfer learning technique, that offers an innovative solution to these challenges. This approach leverages the ability of neural networks that are pretrained on a larger dataset to readapt to a related smaller dataset. The pretrained AI models are created using a large experimental material database combined with advanced material modelling simulation data. This allows them to predict complex behaviors, such as stress-strain and creep responses, across different dimensions such as temperatures, fiber orientations, and loading conditions. The pretrained AI models are then customized to a new material system with minimal input data using transfer learning. Such an approach is particularly beneficial for generating accurate material data where very limited input data is available. The solution demonstrates significant advancements in material data enrichment by simulating the static stress-strain curves and creep performance of SFRP materials under varied conditions. The proposed approach ensures high data fidelity by focusing on limited but crucial input data, which allows for fast, efficient generation of complete datasets without sacrificing accuracy. Experimental validation has been conducted, with results showing strong alignment between AI-generated predictions and actual measured data for both static and creep behaviors. This confirms the reliability of the workflow and highlights its value in reducing dependency on extensive physical testing. Moreover, the method provides practical guidance on selecting the minimal set of input data necessary to enrich datasets accurately. For example, stress-strain curves and creep data are enriched across multiple factors—such as temperature, loading magnitude, and fiber orientation—allowing engineers to comprehensively model material behavior under real-world conditions with just a few core data points. This work offers a promising alternative to traditional data acquisition for SFRP materials, presenting an efficient pathway to accurate material data generation.
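A minimal sketch of the transfer-learning mechanism described above (generic, not Hexagon's implementation): a network pretrained on a large material database has its feature layers frozen, and only the output head is retrained on a small dataset for the new material. Input features, layer sizes, training data, and the commented-out checkpoint name are hypothetical.

import torch
import torch.nn as nn

# Generic transfer-learning sketch: freeze the feature layers of a pretrained
# network and retrain only the head on a small dataset for a new SFRP grade.
# Inputs (e.g. strain, strain rate, temperature, fiber orientation) are placeholders.

pretrained = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),   # "feature" layers, assumed pretrained
    nn.Linear(64, 1),               # head predicting e.g. stress
)
# pretrained.load_state_dict(torch.load("pretrained_sfrp.pt"))  # hypothetical checkpoint

for param in pretrained[:4].parameters():   # freeze the feature layers
    param.requires_grad = False

optimizer = torch.optim.Adam(pretrained[4].parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x_small = torch.rand(32, 4)                 # small dataset for the new material (placeholder)
y_small = torch.rand(32, 1)

for _ in range(200):                        # short fine-tuning loop
    optimizer.zero_grad()
    loss = loss_fn(pretrained(x_small), y_small)
    loss.backward()
    optimizer.step()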
Authored & Presented By Volker Gravemeier (AdCo Engineering GW GmbH)
AbstractTribological systems represent one of the most challenging multiphysics problems, involving various coupled physical fields and occurring, e.g., in mechanical, automotive and production engineering as well as medical technology. Examples of such systems are bearings, gears, seals, valves, and synovial joints. They typically feature the interaction of contacting surfaces separated by a thin fluid film. According to a recent study, about 23% (119 EJ) of the world’s total energy consumption originates from such systems. In this presentation, a comprehensive multiphysics computational method for thermal elastohydrodynamic lubrication (TEHL) and results obtained from applying it to tribological systems will be presented. As a matter of fact, taking their multiphysical nature into account when simulating such systems is inevitable in most cases in order to truly reflect their real-world features. For this purpose, it is typically both mandatory and challenging to consider all (nonlinear) effects of the individual physical fields as well as their mutual interactions. Only in this way is it ensured that one eventually obtains reliable simulation results. This is particularly true as soon as one approaches, for instance, the threshold range for dimensioning technical systems such as those prevalent in tribology. Among other things, the proposed new method overcomes typical limitations frequently reported in the literature for existing simulation methods for TEHL, such as (i) restrictions to reduced-dimensional or static/steady-state problems, respectively, (ii) the availability of merely rather simple material laws (e.g., linear elasticity), (iii) the inevitable avoidance of contact scenarios within boundary and mixed lubrication regimes at all or the use of simplified modeling assumptions (e.g., elastic half-spaces), respectively, (iv) limitations on “code couplings” using commercial or open-source CAE software packages as “black-box” components, and (v) restrictions to node matching or accuracy-reducing interpolation procedures, respectively, at domain interfaces, to name a few. In contrast, the presented advanced computational method, available within a single CAE software package based on finite element formulations, enables predictive, fully-coupled and detailed 3-D resolved simulations along the complete spectrum of the Stribeck curve and beyond. The latter is ensured by a three-level approach to solving the lubrication/fluid field, with the governing mathematical formulations of those levels ranging from the standard Reynolds equation via the generalized Reynolds equation to the complete Navier-Stokes equations. Thus, with the third level, a seamless transition to simulating even more general thermo-fluid–structure interaction (TFSI) configurations is enabled. Concerning the involved structural/solid field, usually split up into two domains encompassing the lubrication/fluid domain, geometrically nonlinear structural mechanics with various nonlinear material laws is enabled. In particular, detailed (thermal) contact mechanics is taken into account via dual mortar methods with full linearization, representing one of the currently most promising computational approaches to “dry” contact mechanics (i.e., irrespective of any additional lubrication effect). Beyond their use for contact mechanics, mortar methods are, in general, one of the key components of the overall numerical method in being the preferential approach to taking the interfaces within the systems into account.
An adequate computational consideration of the involved interfaces, both with respect to surface and volume couplings, is particularly important for accurately simulating coupled multiphysics systems such as tribological systems. After an introduction and a display of selected main features of the method, the presentation of various results from applying it to typical and widely encountered tribological systems such as the ones mentioned above will be the key content of this talk.
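For reference, the first of the three solution levels mentioned above, the standard Reynolds equation, takes the following textbook form for an isothermal, incompressible film of thickness h, viscosity \mu and mean surface velocity u_m (this is the generic equation, not a formulation specific to the presented method):

% Standard isothermal, incompressible Reynolds equation for the film pressure p(x,y,t):
\[
\frac{\partial}{\partial x}\!\left(\frac{h^{3}}{12\,\mu}\,\frac{\partial p}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(\frac{h^{3}}{12\,\mu}\,\frac{\partial p}{\partial y}\right)
= u_m\,\frac{\partial h}{\partial x} + \frac{\partial h}{\partial t}
\]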
Presented By Walter Hinterberger (Magna Powertrain Engineering Center Steyr)
Authored By Walter Hinterberger (Magna Powertrain Engineering Center Steyr), Christian Neubacher (Engineering Center Steyr GmbH & Co KG, Magna Powertrain)
AbstractThis presentation describes an automated simulation process for the prediction of chip crack zones, vibrational failure of solder joints, and thermal stresses of printed circuit board assemblies (PCBAs). Applied in an early phase of development, it saves design loops as well as physical testing and therefore reduces development costs. Every investigation starts with the creation of a finite-element model of the whole PCBA using a suitable discretization. For static and dynamic investigations, the simulation effort can be dramatically reduced by the use of a material homogenization approach without a significant loss of accuracy. Specifically, the PCB is coarsely discretized and a separate orthotropic material is generated for each finite element, approximating the complex layer structure of the printed circuit board. This significant reduction in the numerical degrees of freedom to be solved makes such systems accessible for dynamic analyses. The inclusion of all surface-mounted devices (SMDs) as well as the solder joints, which are modelled using different sub-structuring techniques, completes the method. For the solder joints, the geometry of the solder meniscus is automatically generated using a parametric CAD model and a minimal surface approach, taking into account the surface tension of the liquid solder. The resulting PCBA model is then used to calculate chip crack zones or its dynamic behavior for vibrational damage evaluation of each solder joint. In contrast, thermal investigations require a finely discretized PCBA including all layers, traces and vias of the PCB. With sub-modelling techniques, one can perform a detailed thermal stress analysis of critical regions (e.g. plated through holes – PTH) in the PCB. In this talk, examples of the structural reliability analysis of PCBAs are presented, showing how these processes benefit from FEA process automation. Especially the automatic generation of detailed solder joint FE-models as well as detailed FE-models of the PCB based on ECAD data, combined with sub-structuring and material-homogenization techniques, reduces the computational effort.
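To illustrate the homogenization idea in its simplest form (not the actual per-element orthotropic procedure used in the talk), the sketch below estimates effective moduli of a hypothetical copper/FR-4 stack-up by thickness-weighted (Voigt) and inverse (Reuss) averaging.

# Simplified rule-of-mixtures sketch of PCB layer-stack homogenization: the
# effective in-plane modulus is estimated as a thickness-weighted average of
# copper and FR-4 layers. The stack-up and property values below are hypothetical.

layers = [  # (material, thickness in mm, Young's modulus in GPa)
    ("copper", 0.035, 117.0),
    ("FR4",    0.200,  24.0),
    ("copper", 0.035, 117.0),
    ("FR4",    0.200,  24.0),
    ("copper", 0.035, 117.0),
]

total_t = sum(t for _, t, _ in layers)
e_inplane = sum(t * e for _, t, e in layers) / total_t          # Voigt (parallel) estimate
e_thickness = total_t / sum(t / e for _, t, e in layers)        # Reuss (series) estimate

print(f"effective in-plane modulus estimate:      {e_inplane:.1f} GPa")
print(f"effective through-thickness estimate:     {e_thickness:.1f} GPa")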
Authored & Presented By Kambiz Kayvantash (Hexagon Manufacturing Intelligence)
AbstractTraditional Optimization, Reliability analysis (UQ) and Robust Optimization (RO) using finite elements and similar discretization methods are costly due to their iterative nature. This leads to the adoption of strategies that oversimplify the design models, which may end in catastrophic failures or a lack of quality. This problem is due to the fact that robustness (which may be considered as the inverse of the fragility of a given system) cannot be correctly evaluated if small variations and their consequences are ignored. However, traditional RSM (response surface methods, or more generally surrogate methods) are not capable of capturing small changes and hypersensitivities of the system response. Various ROM (Reduced Order Model) type solutions have been reported to create fast and sufficiently precise surrogate models allowing for real-time evaluations requiring multiple (looped in the case of RO) iterations. This partially solves the problem of numerous and costly simulations without necessarily downgrading the precision and accuracy of the individual model evaluations. A second problem persists in the fact that the currently available reliability methods are based on a simple statistical analysis, mainly the first two statistical moments (average and standard deviation) of the analysis runs. This proves to be inefficient both in performance, due to excessive averaging of the problem, and in terms of the number of computation runs. At every iteration of the search for the optimal point, new statistics must be established around the proposed point, requiring a new DOE (design of experiments) in the vicinity of and surrounding the new point. This has the effect of multiplying every single function evaluation during the optimization process by a factor of N, which is the size of the DOE. We shall remedy this drawback by introducing the concepts of Complexity and Entropy. In this paper we shall investigate a novel machine learning methodology and associated solutions based on the concept of entropy of information.
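As a minimal illustration of the entropy idea (one possible formulation, not necessarily the authors'), the Shannon entropy of a sampled response can serve as a scalar dispersion indicator that goes beyond mean and standard deviation.

import numpy as np

# Minimal illustration: estimate the Shannon entropy of a simulated response
# sample from a histogram. The two response samples below are synthetic.

def shannon_entropy(samples: np.ndarray, bins: int = 30) -> float:
    counts, _ = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                             # ignore empty bins
    return float(-(p * np.log2(p)).sum())    # entropy in bits

rng = np.random.default_rng(3)
robust_response = rng.normal(100.0, 1.0, 5000)     # tightly clustered response
fragile_response = rng.normal(100.0, 8.0, 5000)    # widely scattered response
print(shannon_entropy(robust_response), shannon_entropy(fragile_response))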
Presented By Janne Ranta (Dassault Systemes)
Authored By Janne Ranta (Dassault Systemes), Padmeya Prashant Indurkar (Dassault Systemes), Matthias Friedrich (Dassault Systemes)
AbstractPackaging solutions are needed for protecting, distributing, and marketing products. While primary packaging usually encases the product itself, secondary packaging gathers multiple products with their primary packages for distribution. A common choice for secondary packaging is to use corrugated fiberboard because of its advantageous features: cost-effectiveness, high stiffness-to-weight ratio, durability, and eco-friendliness. Even if corrugated fiberboard has been widely used in secondary packaging for decades, a recent development in the field of modeling and simulation has enabled efficient performance-driven design of new packaging solutions. Due to large production volumes, light weighting in packaging by only a few percentages can significantly reduce material usage and CO2 emissions and unlock savings throughout the entire value chain.In this work, we show democratized workflows for performing a box compression test with an empty box made out of corrugated fiberboard. We present easy to adopt and use solutions for both corrugated fiberboard material calibration and virtual box compression test simulation. The corrugated fiberboard is modelled using an orthotropic elastic and anisotropic plastic homogenized composite layup structure. Due to homogenization, we can build existing models very efficiently from both CAD and computational complexity perspectives based on mid-surfaces instead of detailed corrugated fiberboard models. So far, the results from the virtual validation tool built upon finite elements have been promising and well in line with results from a commonly used analytical formula. This increases our confidence to look forward with our implementations and also to study other scenarios, like top load and drop test scenarios. Our solution is compatible with existing design exploration and optimization workflows.The value of democratized workflows lies in the fact that people who have no domain expertise in all highly technical fields that are required in packaging simulation process and method development can conduct complex processes and benefit from them. These fields include paper mechanics/engineering, CAD modeling, material modeling, simulation, data analysis, etc. Naturally, the necessary inputs and outputs are reduced to what the end user typically is familiar with. Hence, in the best case, a successfully designed democratized workflow enables companies to adopt highly sophisticated technology to take full advantage of simulation, perform virtual validation, and optimize packaging solutions, for example. Ultimately, democratized solutions are vital in accelerating routine work of everyone regardless of the level of expertise.
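Assuming the "commonly used analytical formula" referred to above is the simplified McKee formula (an assumption on our part), the reference estimate against which the finite element results can be benchmarked looks like this; the input values are illustrative only.

import math

# Simplified McKee estimate of box compression strength:
# BCT ~ 5.87 * ECT * sqrt(board caliper * box perimeter). Inputs are illustrative.

def mckee_bct(ect_kn_per_m: float, caliper_m: float, perimeter_m: float) -> float:
    """Return the estimated box compression strength in kN."""
    return 5.87 * ect_kn_per_m * math.sqrt(caliper_m * perimeter_m)

print(f"BCT estimate: {mckee_bct(ect_kn_per_m=7.0, caliper_m=0.004, perimeter_m=1.2):.2f} kN")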
BiographyJanne Ranta is a Senior Industry Process Consultant at Dassault Systèmes, focusing on MODSIM, integrated modeling and simulation. He has over 15 years of experience in engineering and computational mechanics. Janne joined Dassault Systèmes in 2021. Currently, he accelerates businesses by developing and delivering value-driven processes and solutions for a broad range of industries. Janne received his Master's and Doctoral degrees in Solid Mechanics from Aalto University, Finland.
Presented By Sascha Hasenoehrl (Karlsruher Institut für Technologie (KIT))
Authored By Sascha Hasenoehrl (Karlsruher Institut für Technologie (KIT)), Johannes Klotz (Karlsruhe Institute of Technology), Marcus Geimer (Karlsruhe Institute of Technology), Sven Matthiesen (Karlsruhe Institute of Technology)
AbstractSystems designed for impact applications, such as rotary hammers or demolition hammers, are employed extensively within the construction industry. The development of such systems requires comprehensive testing, which is essential for acquiring valuable insights during the development process and validating the final product. Conventionally, tests are conducted using materials such as natural stone or concrete. However, tests with such materials present significant drawbacks, including environmental pollution, limited reproducibility due to heterogeneous materials, and the inability to conduct long-term tests, due to destruction. Furthermore, the utilisation of substitute workpieces, such as the system Dynaload, is also associated with limitations. One such limitation is that it lacks the capacity for adjustment to imitate different materials.In order to develop a substitute workpiece that ensures the reproducibility of environmentally friendly and realistic test conditions, it is essential to ascertain precisely which force characteristics the workpiece is subjected to. However, thus far, no suitable method for directly measuring contact forces has been identified. To address this problem, a computer-aided simulation that models the components and their interactions has been created. A rotary hammer was selected as an example. The objective of the simulation is to replicate the force waves generated by the rotary hammer, which leads to the contact forces between the tool and the workpiece.In order to parameterise the simulation, measurements were conducted in accordance with the European Standard for Testing of Mechanical Properties of Materials (EPTA 05/2009). This standard permits the documentation of force waves transmitted through a rod. While direct measurement of contact forces at the tool-workpiece interface is not a viable option, the analysis of these force waves allows for the validation of the transferred energy and momentum changes within the system. The EPTA tool, as defined in EPTA 05/2009, was incorporated into the simulation and employed as a measuring instrument in actual tests. The change in momentum in the EPTA tool was determined from both the measurement data and the simulation and compared with each other in order to assess the degree of correlation between the two. The impact velocity of the flying piston in the simulation was adjusted so that the changes in momentum from the simulation and measurement were in accordance with each other. To validate the simulation, the force waves on the real and the simulated EPTA tool were compared. The resulting coefficient of determination was 83.9%, and the relative standard error was 2.4%. These values demonstrate a high level of agreement between simulation and reality. The simulation of the force waves generated by the percussion mechanism of the rotary hammer and transmitted to the EPTA tool is adequate, allowing for the reasonable assumption that the contact forces between the tool and a specified workpiece are also correctly simulated. In the subsequent step, a virtual substitute workpiece can be modelled to obtain the correct parameter set for different materials.
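The two quantities compared above, momentum change and agreement between force waves, reduce to a time integral of the force signal and a coefficient of determination. The sketch below shows both computations on synthetic placeholder signals (not the actual EPTA measurements).

import numpy as np

# Momentum change = time integral of the force wave; agreement between measured
# and simulated waves is summarized by R^2. Signals below are synthetic.

t = np.linspace(0.0, 2e-3, 2000)                        # 2 ms window
force_measured = 8e3 * np.exp(-((t - 5e-4) / 1.5e-4) ** 2)
force_simulated = 0.95 * 8e3 * np.exp(-((t - 5.2e-4) / 1.6e-4) ** 2)

impulse_measured = np.trapz(force_measured, t)          # momentum change [N*s]
impulse_simulated = np.trapz(force_simulated, t)

ss_res = np.sum((force_measured - force_simulated) ** 2)
ss_tot = np.sum((force_measured - force_measured.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"impulse (measured/simulated): {impulse_measured:.3f} / {impulse_simulated:.3f} N*s")
print(f"R^2 between force waves: {r_squared:.3f}")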
AbstractAs simulation engineers and subject matter experts, we often focus on solving complex technical challenges within a specific technical or physical domain, leveraging a specific simulation approach. However, to maximize the impact and integration of our work, it is essential to understand Systems Engineering (SE) and Model-Based Systems Engineering (MBSE) practices that are becoming increasingly important in 21st-century engineering projects. The INCOSE Vision 2035 provides a picture of how 21st-century engineering will look a few years from now: far more model-based and less document-based, and far more tightly integrated across domains, especially with respect to simulation. This requires the still often disconnected communities to come together more closely, which is the ambition and charter of the INCOSE NAFEMS Systems Modeling & Simulation Working Group for the case of MBSE and system simulation. This talk aims to close the gap between simulation engineers and practitioners and the realm of SE/MBSE by offering insights into how SE/MBSE ideas and practices can enhance collaboration between engineers and improve (simulation) project outcomes. Attendees will learn how adopting SE and MBSE concepts and approaches can help to manage complexity, ensure traceability, and integrate their simulation models more effectively within larger system architectures and system models. By understanding core SE and MBSE principles, workflows, and tools, simulation engineers can improve communication with cross-functional teams, contribute to system-level requirements and technical concepts, and elevate the impact and value of their work in modern engineering projects. Specifically, we will provide insights on Systems Engineering and Model-Based Systems Engineering (MBSE); key SysMLv2 concepts relevant for simulation engineers, such as Analysis, VerificationAnalysis, and TradeStudy; the integration of simulation within SE and MBSE frameworks; the skills required for simulation engineers working with SE/MBSE personnel; and an outlook on future trends and the evolving role of simulation engineers in the context of SE & MBSE.
Presented By Thibault Jacobs (Noesis Solutions NV)
Authored By Thibault Jacobs (Noesis Solutions NV), Srdjan Djordjevic (Cadence Design Systems)
AbstractThis study proposes a systematic and automated methodology for optimizing MOSFET (Metal-Oxide-Semiconductor Field-Effect Transistor) power modules, essential components in high-power electronic systems requiring efficient operation and robust thermal management. The approach integrates multi-domain simulations into a unified optimization framework, enabling simultaneous improvements in electrical and thermal performance, and addressing key challenges in power module design, such as managing high voltages, currents, and power consumption. The optimization targets critical design parameters, including MOSFET placement, wire bond diameter, and substrate layout, to minimize parasitic inductance and thermal resistance while maximizing power-handling capacity. The process begins with schematic-driven design entry, utilizing tools like Cadence Allegro PKG Designer, which ensures accurate structural definition and seamless integration with subsequent analysis steps. Parasitic extraction is conducted using Clarity 3D quasi-static solvers, and the results are incorporated into PSPICE time-domain simulations to evaluate switching behaviors, voltage and current waveforms, and power losses. Thermal analysis using Cadence Celsius assesses junction temperatures, current density, and heat dissipation to identify and mitigate potential thermal bottlenecks, ensuring reliable operation under high-power conditions. Moreover, S-parameter extractions identify electromagnetic coupling between power paths and gate paths, uncovering potential resonances and electromagnetic compatibility issues that could degrade module performance, particularly at higher switching frequencies. By integrating these analyses into an iterative optimization loop, the framework achieves measurable performance improvements. Automated workflows implemented using Optimus significantly streamline the design process, enabling efficient exploration of design trade-offs. The optimization of wire bond parameters, MOSFET positioning, and substrate layout enhances both electrical and thermal performance, leading to reduced power losses, improved thermal dissipation, and increased operational reliability. This work demonstrates the advantages of a holistic, simulation-driven approach for power module design. The proposed framework balances electrical and thermal considerations, offering a scalable and efficient methodology applicable to automotive, telecommunications, and industrial electronics. The findings underscore the potential of automated design optimization to meet the rigorous demands of modern high-power electronic systems, advancing the development of high-performance power modules.
Presented By Daniel Campos Murcia (DatapointLabs Technical Center for Materials)
Authored By Daniel Campos Murcia (DatapointLabs Technical Center for Materials), Brian Croop (Applus+ DatapointLabs)
AbstractThe accurate calibration of materials models is crucial for simulating the behavior of materials across various industries, including automotive, aerospace, and consumer goods. With the increasing complexity of modern materials, particularly polymers, foams, and composite materials, developing reliable and efficient calibration strategies is more important than ever. This paper presents a comprehensive comparative analysis of calibration strategies for material models applied to these materials, focusing on the challenges and best practices for each material class. The study explores the calibration and validation of three specific models: the Fu Chang model for foams, the SAMP model for polymers, and composite damage models for continuous fiber-reinforced thermoplastic composites (CFRTPs). The research integrates experimental testing campaigns, such as tensile, shear, compression, and impact tests, under varying strain rates and environmental conditions, to collect detailed data that are essential for effective model fitting.For polymers, the semi-analytical SAMP model is calibrated using a combination of advanced testing techniques, including high-speed video analysis and digital image correlation (DIC), along with optimization algorithms. This approach allows for the precise characterization of strain-rate dependencies and the evolution of damage under different loading conditions. In foams, the calibration process focuses on the accurate representation of stress-strain curves and material rate sensitivity, providing valuable insight into robust calibration practices that account for dynamic loading and high-strain-rate phenomena. For CFRTPs, the study addresses material modeling at both macro and meso scales. At the laminate level, the study emphasizes the modeling of nonlinear behavior, including strain-rate effects, making the material models suitable for impact applications. At the lamina level, calibration highlights the importance of fiber orientation and interface modeling, which significantly influence the material's performance under complex loading conditions.This paper highlights the unique challenges posed by each material class, including strain-rate sensitivity, failure evolution, and the complexities involved in multi-scale modeling. It also discusses the importance of flexible calibration workflows that can adapt to different material behaviors and loading scenarios. The integration of advanced optimization tools is crucial for achieving reliable predictions in dynamic applications after the calibration process. The findings of this research offer valuable insights into the optimization of calibration strategies and provide practical guidance for selecting the most suitable fitting processes.
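A generic sketch of the inverse-calibration step mentioned above (not tied to SAMP, Fu Chang, or any specific solver): a least-squares optimizer adjusts material parameters until a simulated stress-strain curve matches the measurement. run_coupon_simulation is a hypothetical stand-in for the single-element or coupon FE model driven by the optimizer, and the "measured" curve is synthetic.

import numpy as np
from scipy.optimize import least_squares

strain_exp = np.linspace(0.0, 0.08, 40)
stress_exp = 2000.0 * strain_exp / (1.0 + 35.0 * strain_exp)   # synthetic "measurement" [MPa]

def run_coupon_simulation(params, strain):
    """Hypothetical stand-in for the FE coupon model evaluated at given parameters."""
    e_mod, sat = params
    return e_mod * strain / (1.0 + sat * strain)

def residuals(params):
    return run_coupon_simulation(params, strain_exp) - stress_exp

result = least_squares(residuals, x0=[1500.0, 20.0], bounds=([100.0, 1.0], [5000.0, 100.0]))
print("calibrated parameters:", result.x)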
Presented By Randhir Swarnkar (TimeTooth Technologies)
Authored By Randhir Swarnkar (TimeTooth Technologies), Ashwin Vasudeva (Timetooth Technologies Pvt Ltd, India), Gagan Sharma (Timetooth Technologies Pvt Ltd, India), Girish Mudgal (Timetooth Technologies Pvt Ltd, India)
AbstractTimetooth Technologies has developed landing gears with optimum energy absorption capabilities for drones and UAVs. In conventional methods, physical tests are conducted to validate the performance, and the test results are then used to fine-tune the design. However, such tests are time-consuming and costly. The current study focuses on the design of landing gear by leveraging advanced Computer Aided Engineering (CAE) simulations and predictive modeling approaches to accurately capture the entire mechanics involving structures and fluids. Through a judicious combination of CAE tools and the application of machine learning algorithms, the intricate landing gear system was designed efficiently, effectively, and in a simplified manner. The efficient dissipation and absorption of energy during landing rely primarily on two key properties: stiffness and damping. The stiffness property is provided by a pre-charged gas, while damping is achieved by the restricted flow of fluid across an orifice during the stroke. The multibody dynamics model incorporated essential landing parameters such as the vehicle's weight, vertical sink rate, horizontal landing speed, and allowable ground reaction factor, while capturing the structural deformation of critical components. To optimize the performance of the landing gear, variables such as gas stiffness, orifice radius, and tire stiffness were considered and treated as adjustable parameters. For modeling the damping characteristics, a novel approach that combines Computational Fluid Dynamics (CFD) simulations with machine learning techniques is introduced. Approximately 500 CFD simulations were performed for various orifice radii to obtain the corresponding force-velocity characteristics, which were then used to train a machine learning model. The trained and validated model was employed to predict the orifice radius corresponding to a desired set of force-velocity characteristics. This method provides a more efficient way to design damping systems, significantly reducing the need for multiple iterations of CFD simulations. Dynamic energy absorption characteristics of the landing gear were validated through physical drop tests. The correlation between the design parameters and test data was found to be remarkably good, indicating the accuracy and reliability of the ML-augmented CAE simulations. This high level of correlation demonstrated the effectiveness and efficiency of the design process that captures multiple physical phenomena. This approach demonstrated several advantages, including convenience, cost-effectiveness, and time efficiency. The findings of this study make valuable contributions to the advancement of landing gear design and the augmentation of machine learning with CAE simulations in the field of aerospace engineering.
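The inverse mapping described above, from a desired force-velocity characteristic back to an orifice radius, can be sketched with an off-the-shelf regressor. The data below is synthetic and merely stands in for the roughly 500 CFD runs, and the v^2/r^4 scaling is illustrative, not a calibrated law.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
radius = rng.uniform(1.0, 4.0, 500)                       # orifice radius [mm]
velocities = np.linspace(0.2, 2.0, 10)                    # probe velocities [m/s]
# Damping force assumed to scale roughly with v^2 / r^4 (illustrative only)
forces = (velocities[None, :] ** 2) / (radius[:, None] ** 4) * 1e3
forces += rng.normal(0.0, 0.5, forces.shape)              # CFD "noise"

X_train, X_test, y_train, y_test = train_test_split(forces, radius, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.3f}")

target_curve = (velocities ** 2) / (2.5 ** 4) * 1e3       # desired force-velocity characteristic
print(f"predicted orifice radius: {model.predict(target_curve.reshape(1, -1))[0]:.2f} mm")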
BiographyHe earned his Bachelor of Technology degree from the Indian Institute of Technology (IIT) Ropar in 2017. During his B.Tech program, he undertook an internship at TU Munich, supported by a DAAD/IAESTE grant in Materials Engineering. Following his graduation, he joined Timetooth Technologies, India as an Analyst. His specialization centered around Multibody Dynamics analysis, and he contributed to a range of projects spanning the aerospace, medical, and defence sectors. His role primarily involved the design, development, and qualification of innovative, turnkey products. His notable achievements include: • Designing, developing, and qualifying Landing Gear systems for UAVs in accordance with FAR 23 standards. The successful completion of the RLV-LEX mission this year stands as a testament to his expertise in this area. • Spearheading the design, development, and qualification of Landing Gear systems for other indigenous UAV programs, leveraging the power of CAE simulations. Currently, he holds the position of Project Manager at Timetooth Technologies, where he is at the forefront of designing and developing a military exoskeleton. This encompasses a wide range of engineering disciplines, showcasing his proficiency in interdisciplinary engineering solutions.
Authored & Presented By Cem Özgür Kösemek (AT&S)
AbstractC. Koesemek (AT&S), S. Waschnig (AT&S), Brenda L. (AT&S), Stadlhofer Barbara (AT&S), M. Frewein (AT&S), P. Fuchs (PCCL). Integrated Circuit Substrates (ICS) are critical in modern electronics, supporting higher speeds and greater functionality as devices become more compact. The trend towards miniaturization and embedding components within IC substrates is driven by the demand for smaller, more powerful devices. This requires precision in manufacturing and innovations in fabrication techniques to address challenges like thermal management, mechanical stress, and signal integrity. Therefore, Embedded Component Packaging (ECP®) technology leads this trend by embedding components within the IC substrate layers, saving space and enhancing performance. However, miniaturization increases mechanical stress and warpage, affecting reliability. Simulations can help optimize the layer structures and material compositions to achieve the best performance with minimal warpage. Finite Element Method (FEM) simulations are essential for predicting and minimizing warpage in IC substrates, ensuring the reliability and performance of the final product. This study employs Finite Element Method (FEM) simulations and experimental measurements to analyze and predict warpage behavior in IC substrates with embedded components. A test vehicle was designed considering different embedded component configurations in order to validate the simulation by test. The investigation utilized several warpage measurement methods, including digital image correlation (DIC), shadow moiré, and 3D microscopy. The experimental results were then compared with the simulation results. Experimental data is collected to determine the local undulation level on the area of interest surface as well as the overall warpage level of the IC substrate. The IC substrate and embedded components were modeled using an advanced modeling technique known as the material homogenization approach. Using the material homogenization approach, embedded components were integrated by modeling three different materials within the same layer. This is a crucial part of the study, as these modeling methods enable the creation of a 3D solid model for complex IC substrate designs. The simulation results were compared with data from three different measurement methods. The strong agreement between the measured data and the simulation results clearly demonstrates that the IC substrate with embedded components can be effectively simulated using the presented material homogenization approach. Further work will also focus on the assessment of resin flow to scale the method to even more complex ICS structures including more build-up layers.
Presented By Massimo Galbiati (Particleworks Europe)
Authored By Massimo Galbiati (Particleworks Europe), Sabine Wenig (Sika Automotive AG)
AbstractThe growing shift towards electric vehicles and the imperative to reduce CO2 emissions are driving the automotive industry towards lightweight vehicles. For both gasoline and hybrid vehicles, decreasing weight significantly enhances fuel efficiency and lowers CO2 emissions. In electric vehicles, weight reduction is especially important for extending driving range.The vehicle body contributes around 40% of the total weight, and the use of aluminum profiles and castings is increasing to help reduce this weight. However, bonding these components effectively is challenging because traditional adhesives are inadequate for such applications. As a solution, innovative adhesives have been formulated specifically for these needs. These adhesives are injected into joint gaps, flowing along designated channels that cannot be sealed laterally due to necessary joining tolerances. They are engineered to flow within these unsealed channels without intruding into adjacent gaps. A key property of these adhesives is their special rheological behavior, which is essential for achieving the desired bonding performance.In this context, simulation is crucial for predicting adhesive behavior, including its spreading time and minimizing unwanted intrusion into lateral gaps. However, simulating adhesive injection presents several challenges, such as accurately defining non-Newtonian and temperature-dependent viscosity, managing transient free surface flow in complex geometries, and addressing heat transfer between adhesive and surrounding walls. To tackle these challenges, a Moving Particle Simulation model has been developed and validated. Moving Particle Simulation (MPS) is a meshless CFD technique designed to solve the Navier-Stokes equations. It was initially created for transient free surface flows and employs a fully implicit solver for simulating highly viscous non-Newtonian materials. MPS can handle heat transfer and accounts for viscosity changes with temperature, making it well-suited for simulating adhesive behaviors.The MPS model has been developed using the same geometry and conditions as a test channel located at the SIKA AUTOMOTIVE AG facilities. This model simulates the flow within the main channel, and two key outputs are compared with experimental measurements: the time taken to reach a specified distance from the injection point and the extent of intrusion into lateral gaps. Simulation and experimental tests are conducted for various channel geometries and different heights of the lateral gaps.The analysis of simulation outcomes and measurements shows that MPS can reliably forecast the adhesive injection process and how it spreads, as long as the characteristics of the fluid are correctly specified and the impact of fluid temperature on viscosity is precisely calculated throughout the adhesive route. The research pointed out that the simulation of real car components could require extensive computational time. However, this issue can be mitigated by the MPS approach's high efficiency in utilizing multi-GPU computing.The paper presents the Moving Particle Simulation technique, the model based on the SIKA AUTOMOTIVE AG geometry and conditions, along with a qualitative and quantitative analysis of the simulation outcomes compared to experimental data for three test scenarios.
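One common way to express the non-Newtonian, temperature-dependent viscosity needed as simulation input (the actual rheology model used in the study is not stated, so this choice is an assumption) is a Cross-type shear-thinning law combined with an Arrhenius-type temperature shift; all parameter values below are illustrative.

import math

# Illustrative Cross-type shear-thinning viscosity with an Arrhenius-type
# temperature shift; parameters are placeholders, not measured adhesive data.

def viscosity(shear_rate, temp_k, eta0=400.0, lam=0.8, n=0.35,
              e_act=45e3, t_ref=296.15, r_gas=8.314):
    """Viscosity [Pa*s] at a given shear rate [1/s] and temperature [K]."""
    a_t = math.exp(e_act / r_gas * (1.0 / temp_k - 1.0 / t_ref))   # temperature shift factor
    return eta0 * a_t / (1.0 + (lam * a_t * shear_rate) ** (1.0 - n))

for temp_c in (23.0, 40.0, 60.0):
    print(temp_c, [round(viscosity(g, temp_c + 273.15), 1) for g in (0.1, 10.0, 1000.0)])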
Authored & Presented By Marko Thiele (Scale)
AbstractEfficient product development, particularly for complex systems like vehicles, relies increasingly on a seamless integration of digital tools and processes. Simulation Data Management (SDM) plays a pivotal role in virtual product development by providing a centralized platform for managing, sharing, and automating key aspects of the development cycle. SDM ensures that all relevant data – ranging from simulation models and results to physical testing outcomes – are consistently available, fostering collaboration and enabling faster, more reliable decision-making across teams and locations. This presentation explores how a Systems Engineering approach, combined with a Simulation Data Management (SDM) platform and Model-Based Systems Engineering (MBSE), supports the efficient development of complex products, such as vehicles. It highlights: 1. the integration of MBSE methodologies, focusing on requirement definition and verification processes to streamline development and ensure alignment with technical requirements. Special attention is paid to traceability, which covers the entire process from requirements through verification by simulation and physical test to evaluation and reporting of the project status. 2. the role of an SDM platform in managing and sharing all relevant data, including simulation models, processes, results, and physical testing outcomes, while fostering collaboration across teams. It will show how CAE models and all types of process scripts are created and maintained by a small number of experts for use by all users across the Group's various brands and partners, ensuring smooth operation regardless of the location of the engineers, and enabling effective collaboration. 3. the efficiency gains through the automated evaluation of key results, which are compared directly to the predefined project requirements within the SDM system's user interface. This streamlines the process, reducing manual intervention and ensuring accuracy. Also, the automation of report generation is shown for simulation and physical test results, which improves productivity, enabling quicker insights and decision-making. Using real-world examples, the presentation demonstrates how these tools and methodologies enable the whole simulation process as well as the definition of requirements, validation of sub-systems, and verification of the entire product through both simulation and physical testing. This integrated approach ensures a comprehensive and automated framework for virtual product development.
AbstractThermoplastic and thermoset materials are extensively used in transportation, machinery, consumer products, and heavy industry, where material performance and high-rate impacts are crucial to characterize and design for. Using simulation-driven design to predict product performance is critical to reducing design iterations, reducing time-to-market, and improving performance. Performing accurate Finite Element (FE) simulations requires testing the time-dependent behavior of thermoplastic materials due to their inherent complex viscoelasticity and viscoplasticity. In addition to changing material behavior with strain rate, polymers exhibit stress-state dependent failure behavior, most often characterized by stress triaxiality (defined as the ratio of the pressure to the von Mises stress). Fully characterizing a material’s behavior requires testing the material at multiple strain rates and in multiple loading modes. The multiple loading modes are required to measure the pressure dependence of the yield surface as well as the stress-triaxiality dependent failure surface. In this work we will present the results of a study on TPO where we tested the material from quasi-static strain rates (0.001/s) to impact rates (1000/s), utilizing a drop tower for the high-rate tests. We test the material in multiple test modes to measure the pressure-dependent failure behavior, including: high-rate shear with samples similar to ASTM B831, modified to be smaller and appropriate for a drop tower; a dart impact test similar to ASTM D1709; and high-rate notched tension tests to measure the failure locus. This is in addition to low strain rate failure data in uniaxial tension to characterize the rate dependence of the failure locus. We use this data to select and calibrate a rate-dependent material model for use in explicit FE simulations. We selected the SAMP-1 material model, calibrating the material model using single-element FE simulations to optimize the material parameters. After calibrating the material parameters, we then perform FE simulations of the shear, dart impact, and notched tension tests using an inverse calibration approach to optimize the material model failure parameters. We will present the results of the simulations, showing good correlation between the material test results and FE simulations, yielding an accurate and reliable material model for use in explicit simulations that include material failure.
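As a small illustration of the stress-state measure discussed above, the sketch below computes the triaxiality of a 3x3 stress tensor using the common hydrostatic-stress convention (mean stress over von Mises stress; depending on the sign convention for pressure, the two definitions differ only in sign). The example stress states are generic textbook cases, not data from the TPO study.

```python
import numpy as np

def triaxiality(sigma):
    """Stress triaxiality = mean (hydrostatic) stress / von Mises stress."""
    sigma = np.asarray(sigma, dtype=float)
    mean = np.trace(sigma) / 3.0
    dev = sigma - mean * np.eye(3)                     # deviatoric part
    von_mises = np.sqrt(1.5 * np.tensordot(dev, dev))  # sqrt(3/2 * dev:dev)
    return mean / von_mises

print(triaxiality(np.diag([100.0, 0.0, 0.0])))           # uniaxial tension ~ 1/3
print(triaxiality([[0, 50, 0], [50, 0, 0], [0, 0, 0]]))  # pure shear ~ 0
```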
Presented By Markel Alaña (Ideko S Coop)
Authored By Markel Alaña (Ideko S Coop), Julen Bastardo (IDEKO), Gorka Aguirre (IDEKO), Harkaitz Urreta (IDEKO), S. G. Ferreira (IDEKO)
AbstractThe machine tool industry increasingly demands higher precision and performance, and rotary tables with hydrostatic guideways are well-suited to meet these requirements due to their low friction, high stability, and excellent load-bearing capacity. Understanding the interplay between structural deformation and hydraulic pressure in these systems is critical for optimizing their design and performance.
The structural design of the table and the hydrostatic guiding parameters play an important role in the performance of the table. Size constraints should be imposed to limit the cost of the part and hydrostatic constraints to ensure sufficient stiffness of the guiding system, which is crucial to ensure accuracy in machining. Designing these parts requires a detailed exploration of the design space, focusing on the bidirectional coupling between structural deformation and fluid pressure that governs the system’s mechanical and hydraulic behavior.
This study presents a custom-developed finite element simulation tool that integrates structural and hydraulic physics into a monolithic scheme to analyze these coupled effects. The structural components are modeled with linear elasticity, while the hydraulic pressure field is calculated using the Reynolds equation for lubrication, derived from Navier-Stokes under assumptions of a thin fluid film and laminar, incompressible flow. The model incorporates nonlinearities arising from the dependence of pressure on the gap between support surfaces and fluid viscosity changes with temperature. An iterative approach is used to linearize and solve the system of equations.
The studied rotary table is made of cast iron, has a diameter of 5000 mm and two hydrostatic support rings. On the one hand, the influence of the elasticity in large hydrostatic tables is assessed by comparing the presented model with an infinitely rigid table. On the other hand, the design parameters of the hydrostatic system are analyzed to study their influence on load capacity, stiffness, gap, and pump power, and a design of experiments is conducted to assess the sensitivity of the table's performance to each parameter. This approach offers a robust framework for designing cost-effective, high-performance rotary tables tailored to specific industrial needs.
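The flavor of the fluid-structure coupling described above can be illustrated with a deliberately simplified example: a 1D Reynolds equation solved on a gap whose height is corrected by a Winkler-type compliance and iterated to a fixed point with under-relaxation. This is a toy stand-in for the monolithic FE/Reynolds tool in the paper; the geometry, fluid data and foundation stiffness are all illustrative assumptions.

```python
import numpy as np

L, n = 0.1, 201                      # pad length [m], grid points
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
h_rigid = 1e-4 - 5e-4 * x            # converging rigid gap: 100 um -> 50 um
mu, U = 0.05, 1.0                    # oil viscosity [Pa*s], sliding speed [m/s]
k_found = 5e12                       # Winkler stiffness [Pa/m], stand-in for FE elasticity

p = np.zeros(n)
for it in range(200):
    h = h_rigid + p / k_found                        # gap corrected by structural compliance
    h3 = h ** 3
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n - 1):                        # d/dx(h^3 dp/dx) = 6*mu*U*dh/dx
        he, hw = 0.5 * (h3[i] + h3[i + 1]), 0.5 * (h3[i] + h3[i - 1])
        A[i, i - 1], A[i, i], A[i, i + 1] = hw, -(he + hw), he
        b[i] = 6.0 * mu * U * 0.5 * (h[i + 1] - h[i - 1]) * dx
    A[0, 0] = A[-1, -1] = 1.0                        # ambient pressure at both ends
    p_new = np.linalg.solve(A, b)
    if np.max(np.abs(p_new - p)) < 1.0:              # converged within 1 Pa
        p = p_new
        break
    p = 0.5 * p + 0.5 * p_new                        # under-relaxed fixed-point update

load = float(np.sum(0.5 * (p[1:] + p[:-1])) * dx)    # trapezoidal integration of pressure
print("iterations:", it + 1, " load capacity per unit width [N/m]:", round(load, 1))
```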
Authored By Henan Mao (Ansys), Jimmy He (Ansys Inc.), Tieyu Zheng (Microsoft Corporation)
AbstractThe growing complexity of Ball Grid Array (BGA) packages, driven by advancements in AI and high-performance computing, has introduced significant mechanical challenges in electronic packaging. Larger chips and higher pin counts amplify stress concentrations and deformation risks, making solder joint reliability a critical factor in ensuring package performance and durability. Solder reflow simulation, an emerging technique, provides essential insights into solder joint behavior, enabling engineers to predict solder joint shapes, assess potential defects, and optimize designs.
Traditionally, the analysis of solder reflow in large BGAs was not included in simulation workflows due to the lack of commercial tools capable of addressing this challenge. The introduction of the ISPG method in Ansys LS-Dyna marked a breakthrough, enabling accurate simulation of the solder reflow process. However, setting up such simulations for large-scale models remains a significant challenge. The process of defining geometry, creating meshes, and applying boundary conditions demands extensive manual effort, often taking hours for models with thousands of solder joints. This labor-intensive setup slows iterative design cycles, delaying the identification and mitigation of reliability risks.
This work presents an innovative approach to addressing the challenges of large-scale solder reflow simulation. By automating the setup process through advanced scripting tools, our workflow reduces input generation time to under 10 minutes, even for models exceeding 4,000 solder joints. The workflow incorporates key steps such as geometry creation, meshing, boundary condition application, and parameter configuration with minimal user intervention. Post-processing tools further enhance efficiency by providing detailed metrics, including solder joint height, diameter, and defect-prone regions based on predefined failure criteria.
This automated methodology empowers engineers to rapidly iterate on BGA designs, delivering actionable insights early in the development cycle. By reducing reliance on manual setup processes, it enables faster, more accurate evaluations of solder joint reliability. These advancements not only enhance design efficiency but also support the development of robust, reliable packages for next-generation electronic systems.
By offering a scalable and effective solution for solder reflow simulation, this workflow bridges the gap between the growing demands of complex electronic packaging and the need for efficient predictive modeling techniques. This innovation establishes a foundation for advancing reliability analysis in cutting-edge electronics manufacturing.
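The kind of scripted setup the abstract refers to can be pictured with a small, purely hypothetical sketch: generating the positions and nominal sizes of a full solder-ball array programmatically, so that geometry, mesh and boundary-condition creation can be driven from data instead of manual work. The pitch, array size and ball diameter below are illustrative, and the hand-off to a pre-processor's scripting API is only indicated in a comment.

```python
import itertools

pitch = 1.0            # mm, ball pitch (illustrative)
n_rows = n_cols = 64   # 64 x 64 = 4,096 joints, the scale mentioned above
ball_diam = 0.6        # mm, nominal ball diameter (illustrative)

balls = [
    {"id": i * n_cols + j, "x": j * pitch, "y": i * pitch, "diameter": ball_diam}
    for i, j in itertools.product(range(n_rows), range(n_cols))
]

print(len(balls), "solder joints generated")
# Each record would then drive the pre-processor's scripting interface to create
# the ball geometry, mesh it, and tie it to the pad and substrate surfaces.
```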
09:00
Authored By Yashwant Liladhar Gurbani (Rolls-Royce Group PLC), Marco Nunez (Rolls-Royce plc), Jon Gregory (Rolls-Royce plc), Shiva Babu (Rolls-Royce plc)
AbstractGeometry modelling plays a crucial role in the computational representation of a given system and in obtaining predictions about its performance, behaviour, etc. through simulation. A range of numerical techniques can also be coupled to enable the execution of design space exploration studies for the identification of optimal designs based on a specified number of objectives and constraints. This is typically achieved through parametric geometry models, with a set of predefined design parameters used to control the dimensions and shape of the model. Currently, BRep (Boundary Representation) is amongst the most common geometry modelling methods used in industry for engineering applications, relying on a mathematical representation of 3D shapes through a collection of basic geometry elements describing the boundaries of its volume. Nonetheless, a number of limitations are associated with such an approach, including in particular:
- The level of control in manipulating geometry (i.e. to what extent the design space can be explored, allowing for the assessment of designs radically different from each other and from an original baseline).
- The simulation time required for the assessment of a single design point, especially when it comes to Finite Element Analysis or Computational Fluid Dynamics simulations of complex systems.
- The difficulty in re-using existing data (i.e. leveraging information obtained from previous simulations).
- The difficulty in addressing, at run time, geometric and topological errors that derive from the geometry manipulation and lead to problems with accuracy, scaling, and meshing, amongst others.
Alternative modelling techniques have been investigated with the aim of providing more freedom and flexibility to create and manipulate the geometry of a system. Whilst some of these have proven to successfully address specific limitations associated with BRep models, further work is required to use them for simulation purposes. This paper presents a novel approach based on the use of neural networks, which offer the possibility of having a single model containing both the geometrical representation of the system under study and the information on its performance against design metrics of interest (e.g., stress field, flow field, displacements, temperatures, etc.). A key advantage of such an approach is that it allows training the neural network model with legacy simulation data to then offer a semi-instant prediction of the performance associated with synthetically generated design candidates.
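A minimal sketch of the idea of training on legacy simulation data and querying a network for near-instant predictions is given below. It uses a small scikit-learn regressor mapping (design parameters, query point) to a scalar field value; the synthetic data, the network size and the assumption of a simple MLP are illustrative and not the paper's actual architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# columns: [design_param_1, design_param_2, x, y]; target: a scalar field value
X = rng.uniform(0.0, 1.0, size=(5000, 4))
y = np.sin(3.0 * X[:, 0]) * X[:, 2] + 0.5 * X[:, 1] * X[:, 3] ** 2   # synthetic "legacy" results

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

# semi-instant prediction of the field value at a query point for an unseen design
print(model.predict([[0.3, 0.7, 0.5, 0.2]]))
```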
Presented By Jack Reijmers (Nevesbu)
Authored By Jack Reijmers (Nevesbu), Pieter Nobel (Nevesbu), Alessandro Zambon (Nevesbu)
AbstractRing-stiffened cylinders under hydrostatic pressure are subjected to several modes of collapse. For instance, collapse can be caused by a structural detail that experiences high stresses. Such a collapse mode can easily be avoided by minor modifications in the design. Another collapse mode consists of the tripping of stiffeners, showing rotation of the ring stiffener about the tangential axis. Prevention of this mode requires a sound choice of the stiffener dimensions, with minor impact on the total weight. With respect to the collapse modes considered here, it can be stated that the ultimate solution in nonlinear Finite Element Analyses (FEA) may show large areas in yield condition without a clear indication of where the plastic capacity is lost first and the collapse is initiated. Considering large areas in yield, the most important collapse modes are: (i) yielding of the shell (interframe collapse), or (ii) yielding of the ring frame (global or overall collapse). Regarding (i): when considering weight optimization, it must be noted that the shell contributes the most to the total weight, and this emphasizes the need to focus on minimum thickness. This draws the attention to interframe collapse; however, global collapse cannot be decoupled from shell behaviour. A substantial part of the shell participates in the bending stiffness of the ring frame, and yielding of the shell decreases this bending stiffness. Regarding (ii): if the ring stiffener starts to yield first, then the shell’s support decreases, and this initiates yielding of the shell. In the FE simulations, a close observation of the deformation and resulting stress as the applied pressure increases gives a better understanding than analysing the stress situation at the ultimate pressure. For twelve pressure hull geometries, radial deformation and von Mises stress distribution versus hydrostatic pressure are assessed, and the nature of collapse (global or interframe) is established. These geometries comprise pressure hulls that are designed to fail by global collapse, as well as designs that aim for minimum weight and therefore minimum thickness. Analytical methods that can be found in the guidelines of Classification Societies are also considered. These methods make a clear distinction between the two collapse modes under consideration, and an interaction between collapse modes is not taken into account.
Nonlinear FE analysis shows the response of the shell and the ring stiffener to the hydrostatic pressure, and the resulting plots reveal in most cases the nature of collapse. However, the analytical methods produce misleading results. In some cases, the predicted collapse pressures are close to the ultimate pressures given by FE analyses, but the corresponding collapse modes are incorrect. For interframe collapse, it is obvious that the analytical approach gives erroneous results since this method is based on axisymmetric behaviour. Out-of-Circularity produces additional bending stresses between the frames that are not covered by the analytical methods. This gives non-conservative predictions of interframe collapse pressures. Analytical methods to analyse bending stresses in the ring frame do incorporate the Out-of-Circularity, but the difference with the nonlinear FE analysis outcomes is significant.
Understanding the nature of collapse follows from carefully observing: von Mises stress plots at the ultimate pressure; and the progress of radial deflection and stress towards the ultimate pressure.
The latter is imperative to distinguish the influence of the stiffener and the shell mid-bay on the collapse. This distinction is also important to review the quality of the analytical approach in predicting stiffener collapse versus interframe collapse. This presentation will demonstrate the discrepancies of the analytical formulations in predicting the collapse pressures of pressure hulls by a comparison to FEA.
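The mode-identification logic described above can be sketched as a small post-processing step: track the von Mises stress at the shell mid-bay and in the ring frame over the pressure increments and report which location reaches yield first. The arrays below are dummy monotonic histories, not results from the twelve hull geometries, and the yield stress is an assumed value.

```python
import numpy as np

sigma_y = 700.0                          # MPa, assumed yield stress
pressure = np.linspace(0.0, 10.0, 101)   # MPa, applied hydrostatic pressure increments
vm_shell_midbay = 90.0 * pressure        # dummy extracted stress histories
vm_ring_frame = 75.0 * pressure

def first_yield_pressure(vm, p, sy):
    """Pressure increment at which the monitored location first reaches yield."""
    idx = int(np.argmax(vm >= sy))
    return p[idx] if vm[idx] >= sy else None

p_shell = first_yield_pressure(vm_shell_midbay, pressure, sigma_y)
p_frame = first_yield_pressure(vm_ring_frame, pressure, sigma_y)
mode = "interframe (shell yields first)" if p_shell < p_frame else "global (frame yields first)"
print(p_shell, p_frame, mode)
```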
Presented By Kevin Posch (Magna Steyr Fahrzeugtechnik)
Authored By Kevin Posch (Magna Steyr Fahrzeugtechnik), Severin Stadler (Magna), Michael Mandl (Magna), Nicola Di Nardo (Magna)
AbstractThe importance of mitigating vehicle soiling extends beyond mere aesthetics, as it can significantly impair drivers' vision and the functionality of critical sensors. Addressing this issue early in vehicle development is crucial, necessitating advanced simulation techniques. Magna has developed a hybrid simulation approach that integrates with wind tunnel tests to form a core component of the continuous aerodynamic development process. The first part is a simple complete vehicle soiling simulation. The simulation model developed by Magna utilizes approximately 200 million cells to precisely track boundary layer flow using the low Y+ approach and a Detached Eddy Simulation (DES) turbulence model. This detailed modeling is essential for capturing the near-wall regions accurately, which are crucial for soiling simulations. In this model, the Lagrangian phase, representing particles, is coupled one-way with the Euler phase, representing the air. This allows for a simultaneous evaluation of the aerodynamic and soiling performance of the vehicle. The incremental computational effort, amounting to a 25% increase in simulation time, is justified by the depth of insight gained. This capability facilitates early decision-making regarding the positioning of sensors and cameras, optimization of rear window and light cleanliness, and assessment of soiling on side walls and door handles. Notably, the simulation's ability to detail particle/droplet trajectories offers a significant advantage over physical tests, enabling precise identification of soiling sources and informed development of mitigation strategies.
For in-depth analysis of specific regions, the use of submodels becomes necessary. These submodels build on the data from the complete vehicle soiling simulation, focusing on localized phenomena. For example, the soiling and its effect on the side mirror and adjacent side window. Such detailed examinations require an elevated degree of modeling accuracy, introducing two additional Euler phases to the simulation: the fluid film model and the Volume of Fluid model.
The fluid film model, a 2-D approximation for wall-bound water, performs well on slightly curved surfaces and sharp convex edges when enhanced by additional detachment models. However, its approximations fail on strongly curved surfaces, like the rear edge of side mirrors. Here, the Volume of Fluid model offers superior accuracy by depicting detailed flow characteristics, albeit with increased mesh and temporal resolution requirements, leading to higher computational demands.
Together, these models form a hybrid simulation approach. This method is especially advantageous during early vehicle development phases, where iteration cycles are rapid. In conjunction with the four phases, consideration of the surface contact angle, influenced by material properties, is essential. In summary, Magna's hybrid simulation technique for vehicle soiling, validated through wind tunnel tests, offers a robust and efficient solution during the early stages of vehicle development. It not only aids in maintaining vehicle aesthetics but also ensures the functionality of safety-critical sensors, thus enhancing overall driver safety and vehicle performance.
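For readers unfamiliar with one-way Euler-Lagrange coupling, the toy sketch below integrates a single droplet's velocity under the drag exerted by a frozen local air velocity, i.e. the particle feels the air but not vice versa. The Schiller-Naumann drag closure and all numbers are assumptions for illustration and are unrelated to Magna's 200-million-cell model.

```python
import numpy as np

rho_g, mu_g = 1.2, 1.8e-5          # air density [kg/m3], dynamic viscosity [Pa*s]
rho_p, d_p = 1000.0, 200e-6        # water droplet density [kg/m3], diameter [m]

def drag_accel(u_gas, v_p):
    """Acceleration of the droplet from Schiller-Naumann drag (one-way coupling)."""
    rel = u_gas - v_p
    re = rho_g * np.linalg.norm(rel) * d_p / mu_g + 1e-12
    cd = 24.0 / re * (1.0 + 0.15 * re ** 0.687) if re < 1000.0 else 0.44
    return 0.75 * cd * rho_g / (rho_p * d_p) * np.linalg.norm(rel) * rel

v = np.zeros(3)                              # droplet released at rest
u = np.array([30.0, 0.0, 0.0])               # local air velocity sampled from the CFD field
g = np.array([0.0, 0.0, -9.81])
dt = 1e-4
for _ in range(1000):                        # explicit integration over 0.1 s
    v += dt * (drag_accel(u, v) + g)
print("droplet velocity after 0.1 s:", np.round(v, 2))
```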
Presented By Lennart Toenjes (DLR - Deutsches Zentrum für Luft- und Raumfahrt)
Authored By Lennart Toenjes (DLR - Deutsches Zentrum für Luft- und Raumfahrt), Sascha Daehne (German Aerospace Center), Christian Huehne (German Aerospace Center)
AbstractMultidisciplinary optimization (MDO) has emerged as a critical approach in the design of aircraft wings, enabling the simultaneous consideration of aerodynamic performance and structural integrity. This integrative method is essential in addressing the complex interactions between disciplines, optimizing the overall wing performance rather than focusing on isolated aspects. A keystone of this process is the accurate determination of interdisciplinary gradients, specifically between the structural and aerodynamic domains. These gradients, which represent sensitivities to changes in the wing geometry, are crucial for guiding gradient-based optimization algorithms toward optimal solutions.
Traditional methods for retrieving these sensitivities often employ free-form deformation (FFD) techniques. Although Free Form Deformation allows modifications to the outer wing shape, accounting for changes to the internal structural elements, such as rib and spar positioning or rotation, remains challenging. This constraint narrows the scope of optimizations, potentially overlooking crucial design improvements achievable through concurrent modifications of both outer and inner wing configurations. To bridge this gap, recent efforts have turned toward CAD-based approaches that integrate finite difference methods to compute the required sensitivities. However, finite difference methods are known to suffer from numerical inaccuracies due to truncation and discretization errors. These inaccuracies propagate through the optimization process, leading to imprecise sensitivities and, consequently, slower convergence rates.
In this study, we propose a novel methodology utilizing algorithmic differentiation (AD) to retrieve more precise interdisciplinary gradients. AD, unlike finite differences, computes exact derivatives by systematically applying the chain rule of differentiation, thereby eliminating numerical errors. The approach utilizes a differentiated CAD kernel to obtain exact sensitivities of node coordinates with respect to geometric design parameters. The optimizer subsequently uses these geometric sensitivities to adjust the wing's geometrical design parameters. The updated node coordinates are then mapped to the newly generated wing shape in each iteration, ensuring a seamless integration between the geometric model and the optimization process.
For demonstration purposes and to minimize computational efforts, simple formulas are employed to estimate the lift-to-drag ratio of the wing shape. This approximation ensures that the focus remains on evaluating the effectiveness of the AD-based CAD sensitivities without excessive computational overhead.
The improved accuracy and efficiency of the proposed method have significant implications for the design and optimization of aircraft wings. Precise sensitivities enable a more efficient exploration of the design space, facilitating innovative solutions that improve aerodynamic performance while maintaining structural robustness. This method also addresses the limitations of existing techniques, offering a comprehensive framework that seamlessly integrates the geometric, aerodynamic, and structural aspects of wing design. By doing so, it facilitates faster and more efficient MDO processes, contributing to advancements in aerospace engineering.
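The core argument, that AD propagates exact derivatives through the chain rule while finite differences introduce step-size error, can be shown with a toy forward-mode implementation based on dual numbers. This is not the differentiated CAD kernel used in the paper; the response function is an arbitrary stand-in for a geometry-dependent quantity.

```python
import math

class Dual:
    """Value plus derivative; arithmetic propagates derivatives via the chain rule."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __radd__, __rmul__ = __add__, __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def response(t):                    # toy stand-in for a geometry-dependent metric
    return sin(t) * t + 2.0 * t

t = Dual(0.8, 1.0)                  # seed dt/dt = 1
ad = response(t).der
f = lambda x: math.sin(x) * x + 2.0 * x
fd = (f(0.8 + 1e-5) - f(0.8)) / 1e-5
print("AD derivative:", ad, " finite difference:", fd)   # exact: sin(0.8) + 0.8*cos(0.8) + 2
```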
Presented By Felipe Robles Poblete (University of Maine)
Authored By Felipe Robles Poblete (University of Maine), Britt Helten (University of Maine - Advanced Structures & Composites Center), George Scarlat (University of Maine - Advanced Structures & Composites Center)
AbstractThis paper presents research conducted at the University of Maine (UMaine) Advanced Structures and Composites Center (ASCC) on the effects of material model temperature and time dependence on numerical model predictions for polymer material extrusion additive manufacturing processes. This information is aimed to give insight to designers, analysts, and engineers so that informed decisions can be made considering time, funding, and resource allocation. The numerical finite element analysis (FEA) models were built in Abaqus 2024 with a sequentially coupled thermo-mechanical analysis combined with an Additive Manufacturing (AM) module. Three benchmark models, namely Ring, Thermometer, and W-Shape, were chosen for this study in order to accentuate different modes of deformation. The material selected was neat polyethylene terephthalate glycol (PETG), as it is a matrix material that has been used in additive manufacturing applications and its properties could be readily obtained. The choice of a neat material was justified to simplify the analysis by neglecting direction effects on the material properties. The material model utilized in this work was obtained from experimental characterization results. The temperature-dependent material property data were fitted by bilinear curves derived from a least squares function available in a Python library (pwlf). The time-dependent material property data were given in terms of master curves and shift factors. The derivation of those curves sets up the parameterization of the material model, enabling the study of model sensitivity as the components of those curves can be varied. Specific heat, elastic modulus and coefficient of thermal expansion (CTE) temperature dependence and elastic modulus time dependence (viscoelasticity) were the primary parameters of interest. The remaining material properties and process parameter values, such as deposition and ambient temperature, were kept constant.
Maximum deformation and maximum von Mises stress were the key performance indicators (KPI) evaluated and used to provide recommendations for modeling considerations. When applicable, SIMULIA iSight 2024 was employed in the sensitivity studies conducted on the benchmark models. Direct deflection comparisons resulting from the use of different material models were made, as well as the evaluation of the impact of the material properties on the KPI values through the derivation of Pareto plots. In terms of sensitivity investigations, comparisons between models using constant and temperature-dependent material models were made. Similarly, some parameters that are part of the viscoelastic model, namely relaxation terms and time constants, were varied to gauge their impact on the same KPI values. In this case, because of the constraint imposed by Abaqus on the use of viscoelasticity with the AM module, a User Material Subroutine (UMAT) written in FORTRAN was developed so that the viscoelastic material property could be used with progressive element activation. Finally, a more comprehensive study was also conducted, in which temperature and time dependent combined effects were compared across the benchmark models.
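The bilinear fitting step mentioned above can be reproduced with a few lines of pwlf; the synthetic modulus-versus-temperature data below (with a change of slope placed at a nominal transition temperature) is only a stand-in for the measured PETG curves.

```python
import numpy as np
import pwlf

rng = np.random.default_rng(0)
temperature = np.linspace(20.0, 220.0, 41)                      # degC
# synthetic "elastic modulus" data with a change of slope at 80 degC
modulus = np.where(temperature < 80.0,
                   2000.0 - 5.0 * temperature,
                   1600.0 - 7.5 * (temperature - 80.0))
modulus = modulus + rng.normal(0.0, 20.0, temperature.size)      # measurement scatter

model = pwlf.PiecewiseLinFit(temperature, modulus)
breakpoints = model.fit(2)                                       # bilinear: two segments
print("fitted breakpoints [degC]:", np.round(breakpoints, 1))
print("modulus at 100 degC [MPa]:", np.round(model.predict([100.0]), 1))
```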
BiographyDr. Felipe Robles Poblete is a Research Engineer within the Advanced Structures and Composites Center at the University of Maine, USA. His current research interests are focused on performance prediction through process simulation and design space exploration of large-scale additively manufactured composite structures through sensitivity analysis and material characterization. Dr. Robles holds a PhD degree in Mechanical Engineering from North Carolina State University, NC, USA. Before holding a Postdoctoral position at the University of Maine, Dr. Robles previously worked in the biomechanical industry at Align Technology in the areas of design, simulation and optimization of mandibular advancement products.
Presented By Dariusz Niedziela (Fraunhofer ITWM)
Authored By Dariusz Niedziela (Fraunhofer ITWM), Konrad Steiner (Fraunhofer ITWM)
AbstractThe production of high-quality foam components for insulation or comfort is a very demanding task, especially when it comes to homogeneous filling of complex assemblies. It is crucial to choose the right amount of injected material and the correct injection position, to set thermal conditions accurately, and to place the vents in the proper locations to achieve the desired final product quality. To avoid time-consuming and costly experimental optimization loops, simulation is the smart choice. One crucial aspect is the correct prediction of foam expansion, which requires a suitable modelling approach, a stable numerical solver and correct model input parameters describing the foaming material. Since foam rheology is very complex, fulfilling all these requirements is not an easy task. We present a foam simulation technology addressing both the issue of proper model parameter identification and foam filling of complex parts. We will discuss an automated material parameter identification tool that allows fast material parameterization based on simple foam expansion experiments. The identified parameters are then used directly in the foaming simulations of the complex components and stored in a foam material database. In addition, we will demonstrate a method to simulate very large and complex parts using an effective porous media approach to accurately resolve small scales. The effective porous media approach substitutes small parts and/or fine structures with a local permeability. Large parts like battery packs with hundreds of cells can be calculated ten times faster on coarser grids with the same accuracy. Finally, we show how the digital twin can be used to predict and control the thermo-mechanical product quality of foam components. The digital twin begins by simulating the foaming process to determine the local density and pore size distribution of the foam component. Based on this, a foam database for different densities and pore sizes is dynamically created. This step relies on microstructure simulations to determine the effective thermo-mechanical behaviour of the foam for every density.
Using the database results from the microstructure simulation together with the local information from the foam process simulation, the component design of the foam parts can be optimized using a standard FE tool, considering the different local material properties, which depend on the process conditions. Some simulations of various applications, including the insulation of battery packs or the foaming of textiles to produce reinforced lightweight structures, as well as refrigerators, seats or insulation panels, will be presented and discussed to showcase the capabilities of digital prediction software.
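To give an idea of what an automated material parameter identification can look like, the sketch below fits a simple logistic rise law to a free-foaming height curve by least squares. Both the expansion law and the synthetic "measurement" are assumptions for illustration; they are not the Fraunhofer ITWM material model.

```python
import numpy as np
from scipy.optimize import curve_fit

def expansion(t, h_max, k, t0):
    """Logistic foam-rise curve: height over time."""
    return h_max / (1.0 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(0)
t_meas = np.linspace(0.0, 120.0, 25)                                               # s
h_meas = expansion(t_meas, 80.0, 0.08, 40.0) + rng.normal(0.0, 1.0, t_meas.size)   # mm

popt, _ = curve_fit(expansion, t_meas, h_meas, p0=[60.0, 0.05, 30.0])
print("identified parameters (h_max, k, t0):", np.round(popt, 3))
# The identified parameters would then feed the foaming simulation of the full
# component and be stored in the foam material database.
```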
Authored & Presented By Klaus Wolf (Fraunhofer SCAI)
Presented By Naghman Khan (SimScale)
Authored By Naghman Khan (SimScale), David Heiny (SimScale GmbH), Kanchan Garg (SimScale GmbH)
AbstractThe unique properties and challenges associated with hydrogen require emphasizing safety considerations in design and operation. The authors will present several simulation case studies showing both intermediate and advanced capabilities for derisking the complex hydrogen supply chain using a cloud-native simulation platform. These features enable engineers to efficiently test and refine designs, ensuring reliability and compliance with industry standards.
The hydrogen industry is working hard to scale H2 production and to optimize transport and utilization as an alternative to fossil fuels. However, many are critical of H2 becoming a dominant energy vector in a decarbonized economy, given the cost and safety considerations. Many pilot projects are focused on developing renewably generated H2, improving process efficiency, for example in electrolyzers and desalination, and developing alternative technologies such as biohydrogen fermentation. Likewise, making H2 safe and cost-effective at the point of use remains a challenge and requires efforts to reduce the energy costs of cooling, liquefaction and gasification, improve storage efficiency and material durability, and research new storage technologies. In this study, the main types of engineering simulation, including computational fluid dynamics (CFD), finite element analysis (FEA), thermal management techniques, and electromagnetic analysis, are applied to hydrogen systems. These systems include hydrogen tank filling, transport, pressurized vessel testing, H2 fuel cell optimization, pumping and valve components, and sealing/gaskets. Furthermore, the analyses are amplified with specific solvers and techniques that leverage standard H2 REFPROP properties and incorporate advanced models for heat transfer and gas mixing. The workflow is augmented using parametric simulations in the cloud for generating data to train a machine learning model for rapid AI-powered simulations at the early design stages. The overall remit of this paper is to demonstrate the capabilities of engineering simulation in the proper design and deployment of hydrogen supply chain equipment and components. The authors have created templated workflows specific to hydrogen, including: hydrogen material properties; thermal and structural properties of common components such as electrolysers, compressors and fuel cells; templates for the transient refueling of hydrogen storage tanks; and guidance on optimizing critical components for hydrogen production, storage, and transport.
The presentation will include three case studies:
1. Filling of an onboard hydrogen tank, looking at pressure dynamics, threshold mean line temperature, heat conductivity of the tank lining and general flow patterns, temperature management, and fine-tuning of fill dynamics (fill time and energy efficiency).
2. Sealing and gaskets, for leakage detection, seal performance (material reliability) and minimizing material and manufacturing costs.
3. Thermal management of fuel cells, analyzing temperature distribution to optimize cooling and coolant flow dynamics to improve heat transfer and fuel cell efficiency.
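The real-fluid hydrogen property look-ups these workflows rely on can be illustrated with a few lines of code. REFPROP itself is a commercial property database, so the open-source CoolProp library is used here as a stand-in to show the kind of call involved; the 700 bar / 20 degC state is just an example tank condition.

```python
from CoolProp.CoolProp import PropsSI

T_fill, p_fill = 293.15, 70e6   # 20 degC, 700 bar onboard-tank condition (example)
rho = PropsSI("D", "T", T_fill, "P", p_fill, "Hydrogen")        # density [kg/m3]
cp = PropsSI("Cpmass", "T", T_fill, "P", p_fill, "Hydrogen")    # specific heat [J/(kg K)]
k = PropsSI("L", "T", T_fill, "P", p_fill, "Hydrogen")          # thermal conductivity [W/(m K)]
print(f"H2 at 700 bar / 20 degC: rho = {rho:.1f} kg/m3, cp = {cp:.0f} J/(kg K), k = {k:.3f} W/(m K)")
```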
Description1) Introduction to uncertainty, statistical modeling, and learning from data● Bayesian and Frequentist concepts of probability● Aleatory and epistemic uncertainty● Tools for statistical machine learning● Statistical modeling: data generating model and prior predictive simulation● Bayes’ rule as the foundation for statistical simulation ● (Machine) Learning from data and posterior predictive simulation2) Examples used● A real-life application of Bayes’ Rule● An engineering application with uncertain data3) Tools used● R● Rstudio● StanWhat will you learn?● You will learn about the basics of Uncertainty Quantification and Statistical Machine Learning starting from first principles such as concepts of probability and the application of Bayes’s Rule.● You will see how statistical simulations can be run and evaluated using freely available software such as R, Stan, and RStudio.● You will learn about application use cases of Statistical Machine Learning in industry. What questions will this course answer?● What is probability and how can I use it to assess and enhance the predictive power of simulations?● How can I use statistical modeling to learn from data? ● How can statistical simulation bridge the gap between traditional simulation and “physics-agnostic” machine learning?● How can I make predictions and decisions in the face of uncertainty? Who should attend?Everyone is welcome to attend. The seminar will be most interesting for ● Simulation engineers that would like to understand the predictive power of their models by quantifying uncertainty● Engineers that would like to broaden their understanding of potential applications of statistical machine learning in their field● Anyone interested in the practical application of physical modeling, machine learning, and data scienceThe workshop/short course is code independent. Examples will be presented with the use of the open-source statistics tools R, RStudio, and Stan.
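As a taste of the first topic, the snippet below applies Bayes' rule on a simple grid: a Beta prior on an unknown failure probability is updated with binomial test data, and posterior-predictive samples are drawn for a new batch of trials. It is written in Python for brevity; the course itself works with R, RStudio and Stan, and the prior and data here are invented for illustration.

```python
import numpy as np
from scipy import stats

theta = np.linspace(1e-4, 1 - 1e-4, 2000)        # grid over the unknown failure probability
prior = stats.beta.pdf(theta, 2, 2)              # prior belief
likelihood = stats.binom.pmf(2, 20, theta)       # data: 2 failures observed in 20 trials
posterior = prior * likelihood                   # Bayes' rule (unnormalized)
posterior /= posterior.sum()                     # normalize on the grid

rng = np.random.default_rng(1)
draws = rng.choice(theta, size=5000, p=posterior)          # samples from the posterior
pred = rng.binomial(20, draws)                             # posterior predictive: next 20 trials
print("posterior mean of theta:", round(draws.mean(), 3))
print("P(>= 4 failures in the next 20 trials):", round((pred >= 4).mean(), 3))
```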
09:20
Authored By Claus Pedersen (Dassault Systemes Deutschland), Yi Zhou (Dassault Systemes Deutschland GmbH), Christian Kremers (Dassault Systemes Deutschland GmbH), Stefan Reitzinger (Dassault Systemes Austria GmbH), Sabine Zaglmayr (Dassault Systemes Austria GmbH), Clemens Hofreither (Dassault Systemes Austria GmbH), Marcel Hoffarth (Dassault Systemes Deutschland GmbH)
AbstractMany sectors have started a transformation to electrical machines due to sustainability requirements and increased performance. Therefore, we present two non-linear multiphysics sensitivity-based optimization approaches. The first approach is a topology optimization solution for the conceptual design of electrical machines, and the second is non-parametric shape optimization for the fine-tuning of electrical machines. The well-known solid isotropic material with penalization (SIMP) interpolation for the material stiffness in topology optimization is applied for the structural CAE modeling as well as for the highly non-linear constitutive material modeling of the reluctivity in the electromagnetic CAE modeling. The optimization process includes a reconstruction of the topology optimization solution, whose topologically optimized design is then passed on to shape optimization. The non-parametric shape modifications are performed directly on the CAE models, where each node is displaced independently. Additionally, the CAE mesh is adapted by an automatic mesh-smoothing algorithm in every optimization step. Furthermore, both in topology and shape optimization, various regularization, symmetry and manufacturing constraints are enforced, ensuring that the optimized designs are industrially feasible. The non-linear sensitivity-based optimization includes not only the design variables for the topology or the shape design but also a phase angle for the injected current at a given operating point. Thereby, the design variables for the material layout of the rotor and stator are optimized simultaneously, including the phase angle in each optimization iteration, using a fully coupled optimization based upon sensitivities and mathematical programming. The optimization is multiphysics, as both electromagnetic and mechanical design responses can be used as objective functions and as constraints. The most common electromagnetic design responses are the average torque and the torque ripple, e.g., maximizing the average torque while constraining the torque ripple to remain below a specified limit. The mechanical design responses are usually stiffness, as objective or constraint, and strength using stress constraints. Previously published work on optimization typically applied simple mechanical modeling for stiffness and strength and did not include periodic boundary conditions and/or non-linear contact modeling. The present work includes these modeling features, yielding accurate optimization results with respect to the mechanical properties. Additionally, another novelty of the present implementation compared to previous work on sensitivity-based optimization of electrical machines is that the pulsation modes of the lumped tooth forces (spatial radial / tangential components) are included as design responses. Thereby, the design responses for the lumped tooth forces of a given order can be defined as the radial or tangential force for addressing specific NVH properties (Noise, Vibration, and Harshness) of the electrical machine.
The stator loads from the electromagnetic simulations are mapped to multi-body simulations for assessing the noise and vibration using a digital thread for the design verification, where the flexible components (stator, housing, shaft, gears) for the multi-body simulations are obtained using finite element modeling. An operating point for the electrical motor is defined by a point of rotational speed and torque. Up to now, the literature has only addressed one operating point. In contrast, we address multiple operating points and thereby multiple phase shifts (one phase shift per operating point) as design variables for rigorously improving the electric performance, since electrical motors do not always operate at a single operating point. With a suitable selection of operating points, an overall reduction of, e.g., torque ripple across the whole operating range can also be expected. We will show topology optimization and non-parametric shape optimization for a PMSM (Permanent Magnet Synchronous Machine) to illustrate that the present workflow can address the numerous design requirements.
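The density-based interpolation at the heart of the topology optimization step can be sketched as follows: each element carries a pseudo-density that scales the material property between a "void" and a "solid" value with a penalization exponent, and an analogous interpolation can be written for the magnetic reluctivity between air and iron. The exponent and property values below are illustrative; they are not the interpolation laws or data used in the paper.

```python
import numpy as np

def simp(rho, prop_void, prop_solid, p=3.0):
    """Penalized interpolation between a void and a solid property value."""
    return prop_void + rho ** p * (prop_solid - prop_void)

rho = np.array([0.0, 0.25, 0.5, 0.75, 1.0])          # element pseudo-densities
mu0 = 4e-7 * np.pi

E = simp(rho, 1e-3, 210e3)                           # Young's modulus [MPa]: near-void -> steel
nu = simp(rho, 1.0 / mu0, 1.0 / (1000.0 * mu0))      # reluctivity [m/H]: air -> iron (linear part)

print("stiffness interpolation  :", np.round(E, 1))
print("reluctivity interpolation:", np.round(nu, 1))
```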
Authored & Presented By Nils Wagner (Intes)
AbstractThis paper investigates the optimization of lightweight structures utilizing Finite Element Method (FEM) techniques, with a particular focus on incorporating constraints related to eigenfrequencies and load factors derived from a linear buckling analysis. As the engineering field moves towards more effective and sustainable design practices, the need for innovative optimization strategies becomes increasingly critical. The study presents a comprehensive approach that not only aims to enhance structural performance by optimizing geometric configurations but also ensures that the designs meet specific vibrational and stability criteria. By applying various optimization algorithms, including sizing and shape optimization, the research evaluates how constraints for natural frequencies and critical loads affect the overall design process. During the optimization of lightweight structures, eigenmodes can change significantly due to parameter variations. To track these modes, the Modal Assurance Criterion (MAC) matrix is commonly computed. This matrix helps assess the correlation between different eigenmodes, indicating how similar the modes are across iterations.
When calculating the MAC matrix, one can choose to reference either the eigenmodes from the baseline model or the eigenmodes from the previous iteration. Comparing the current iteration's eigenmodes with those from the last iteration provides insights into how changes in design parameters affect the vibrational and buckling characteristics of the structure. This tracking is crucial for ensuring that the optimization process maintains or improves structural performance while adapting to new design configurations. By effectively managing eigenmode variations, engineers can enhance stability and functionality in optimized lightweight structures. Case studies involving aerospace and automotive components are utilized to highlight the practical applications and benefits of these constraints in real-world scenarios. The findings demonstrate significant improvements in the structural integrity and efficiency of lightweight designs, confirming the role of FEM in advancing modern engineering solutions. This work contributes to the development of robust lightweight structures while ensuring compliance with essential performance requirements.
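The mode-tracking step can be made concrete with a short sketch of the MAC computation, MAC(i, j) = |phi_i^T phi_j|^2 / ((phi_i^T phi_i)(phi_j^T phi_j)), applied to a current and a reference set of mode shapes. The random mode shapes below merely stand in for FE eigenvectors; two of them are deliberately swapped so that the off-diagonal MAC entries reveal the mode switch.

```python
import numpy as np

def mac_matrix(phi_cur, phi_ref):
    """MAC between column-wise mode shapes of the current and reference model."""
    num = (phi_cur.T @ phi_ref) ** 2
    den = np.outer(np.einsum("ki,ki->i", phi_cur, phi_cur),
                   np.einsum("kj,kj->j", phi_ref, phi_ref))
    return num / den

rng = np.random.default_rng(0)
phi_ref = rng.normal(size=(1000, 5))                  # 5 reference mode shapes, 1000 DOFs
phi_cur = phi_ref[:, [1, 0, 2, 3, 4]] + 0.05 * rng.normal(size=(1000, 5))   # modes 1/2 swapped

mac = mac_matrix(phi_cur, phi_ref)
print(np.round(mac, 2))
print("tracked pairing of current modes:", mac.argmax(axis=1))   # e.g. [1 0 2 3 4]
```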
AbstractFor a complete CFD modeling of jet breakup processes as in atomization followed by spray propagation, a transition model from an interface tracking consideration with the volume of fluid method (VoF) to Lagrangian particles is presented. The model was integrated by the authors into the open-source CFD toolbox OpenFOAM, validated on various benchmarks, and finally used to clarify disintegration processes in molten metal atomization.
Modeling approach: To represent the interactions between dynamic, viscous and surface forces during the disintegration, the initial fluid dispersion is modelled using a volume-of-fluid approach. Here, adaptive meshes on the liquid-gas interface are used to geometrically resolve the lamellae and ligament structure. If the individual cell regions of disintegrated ligaments form separated and spherical structures, they are converted into Lagrangian particles. The transformation is required because the many droplets produced during atomization are often too small for a further spray modelling using interface tracking methods. Typical criteria to identify VoF sections valid for a transition to Lagrangian particles are the sphericity, size and position of separated fluid cell regions. After the transformation the mesh is locally coarsened again and the interactions between the droplets and continuous flow are realized via Lagrangian source terms. In the following, the droplets can further disintegrate according to their respective Weber number in a secondary breakup model. The model has been integrated into OpenFOAM as cloud functions. Thus, it can be used with both incompressible and compressible solvers, with compressible solvers being particularly suitable for twin-fluid atomization at high gas velocities.
Validation: For validation the fuel jet in cross flow benchmark [1] is set up. An LES model is used to represent the turbulence structure and an isoAdvector interface tracking algorithm is applied [2]. The horizontally flowing air atomizes the liquid, whereby the final drop sizes at a greater distance are dominated by the secondary break-up of the Lagrangian droplets according to the Reitz-Diwakar model. Good agreement with experimental results is achieved for the resulting particle size in the control planes at different distances from the injection nozzle. Another application presented is pressure atomization from a round nozzle [3]. For this type of atomization, which is dominated by primary break-up, a good match is also achieved for both droplet sizes and velocities.
Application: The model is then applied to twin-fluid atomization of water and metal melts [4]. In this process, a swirl nozzle creates a lamella that breaks into ligaments and is atomized into small particles by high-velocity gas jets.
References
[1] Sekar, J., et al., Liquid jet in cross flow modeling. In Proceedings of ASME Turbo Expo 2014: Turbine Technical Conference and Exposition, Düsseldorf, Germany, 2014.
[2] Roenby, J., Bredmose, H., Jasak, H., A computational method for sharp interface advection. R. Soc. Open Sci. 3: 160405, 2016. http://dx.doi.org/10.1098/rsos.160405
[3] Deux, E., Berechnung der turbulenten Zerstäubung von Flüssigkeiten durch Kombination eines Zweifluidmodells mit dem Euler-Lagrange-Ansatz [Calculation of turbulent liquid atomization by combining a two-fluid model with the Euler-Lagrange approach], Dissertation, Halle-Wittenberg, 2006.
[4] Kamenov, D., et al., Investigating the Atomizer Performance within Aluminium Melt Atomization. European Conference on Liquid Atomization & Spray Systems (ILASS), 2022.
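The transition criteria named above (sphericity, size) and the Weber-number check for secondary breakup can be illustrated with a small stand-alone sketch; it is not the authors' OpenFOAM cloud-function code, and the blob volume, surface area, slip velocity and thresholds are illustrative values.

```python
import numpy as np

sigma = 0.072        # surface tension [N/m] (water)
rho_gas = 1.2        # gas density [kg/m3]

def equivalent_diameter(volume):
    return (6.0 * volume / np.pi) ** (1.0 / 3.0)

def sphericity(volume, surface_area):
    """Surface of the volume-equivalent sphere divided by the actual surface (<= 1)."""
    return np.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / surface_area

def convert_to_parcel(volume, surface_area, d_max=5e-4, sphericity_min=0.9):
    return equivalent_diameter(volume) < d_max and sphericity(volume, surface_area) > sphericity_min

def weber(d, slip_velocity):
    return rho_gas * slip_velocity ** 2 * d / sigma

vol, area = 4.0e-12, 1.3e-7          # a detached blob from the VoF field (dummy values)
if convert_to_parcel(vol, area):
    d = equivalent_diameter(vol)
    we = weber(d, 40.0)
    print(f"parcel: d = {d*1e6:.0f} um, We = {we:.1f}",
          "-> secondary breakup" if we > 12.0 else "-> stable droplet")
```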
Presented By Thomas Moshammer (Siemens Mobility Austria GmbH)
Authored By Thomas Moshammer (Siemens Mobility Austria GmbH), Alexander Prix (Siemens Mobility)
AbstractAir resistance is an important factor in the speed performance of high-speed trains. Air resistance increases quadratically with speed, so that at higher speeds the influence of the air flow becomes greater and greater. The design of high-speed trains is therefore focused on minimizing air resistance to maximize efficiency.
Using modern calculation methods, the main aerodynamic losses of a train have been analyzed and possible optimization approaches were developed. In addition to the primary losses, the overall drag of the train was reduced through suitable shaping. The bogies play a decisive role here, with a share of approximately 25% of the total air resistance of a high-speed train. Today, bogies of high-speed trains do not have any aerodynamic panels. For the next-generation high-speed train of Siemens - the Velaro Novo - Siemens wanted to develop the most efficient train possible. Therefore, aerodynamic panels for the bogies are essential!
When developing an aerodynamic housing for high-speed bogies, on the one hand the aerodynamic shape is of course important to reduce drag. On the other hand, it is important to consider the thermal effects of the housing on components. The presentation will show the steps taken to develop the optimal aerodynamic shape, including side wind and other effects. Boundary conditions taken from a model of the whole train are applied to a very detailed partial model of the train used for the shape optimization, allowing more optimization loops in a shorter time. One crucial condition in the development of the housing is that it has to work in both directions of travel of the train. Plenty of simulations have been performed to find the best solution.
For the thermal effect of the housing, a different simulation model has been built. In-depth thermal simulations play a significant role in ensuring the optimal performance of high-speed trains. These simulations involve various mechanisms of heat transfer, including natural and forced convection, radiation, and conduction. One critical use case in this respect is the emergency braking of a high-speed train with fully housed bogies, standing still after the braking. Here, natural convection, radiation and other simulation topics must be handled, which is not trivial. To check what happens, a thermal model was set up with the help of CFD including all forms of heat transfer (conduction, convection, radiation). All material characteristics and all emission and absorption coefficients have to be defined, and of course the heat input into the brakes has to be simulated. The real time to be covered is 50 minutes, which is a challenge for the simulation. In the end, the results show which components need to be focused on when talking about thermal stresses.
The synergy between aerodynamics, CFD, and thermal simulation is essential in the development of high-speed trains. These technologies enable engineers to design trains that not only travel at remarkable speeds but also provide a safe, comfortable, and efficient transportation solution. As we look to the future, continued innovation in these fields will drive the next generation of high-speed rail systems, bringing us faster and more sustainable travel options.
Authored & Presented By Markus Wagner (Technische Universität Graz)
AbstractThe adoption of wood-based laminates in high-performance applications is often hindered by their limited fracture toughness. While wood inherently is very strong along its grain (growth direction), it fractures in a brittle fashion, making it less suitable for absorbing energy. To address this, we propose a novel strengthening approach utilizing a stitching process adapted from technical textiles. This method employs industrial heavy-duty sewing machines to introduce through-thickness reinforcement, akin to tufting. However, while tufting relies on friction between the thread and the base material to improve the mechanical performance, stitching utilizes a second thread in order to prevent slippage and therefore further improves the mechanical properties, especially when the laminate is already fractured or delaminated. Such enhancements are particularly critical for wood-based laminates in safety-related applications.
To support this approach, we developed and validated simulation models for double cantilever beam (DCB) and end notch flexure (ENF) configurations against experimental data. These numerical models represent the laminate structure as discrete orthotropic wood layers bonded by cohesive elements mimicking the adhesive. Importantly, the models account for the damage introduced by the stitching process, such as needle-induced perforations. These models are then used to evaluate the balance between this damage and the reinforcement provided by the stitching thread.
A numerical parameter study was conducted to optimize stitching parameters, such as thread type, stitch density, and other sewing patterns. These studies identified configurations that maximize fracture toughness while minimizing damage, offering insights into the trade-offs involved. Additionally, simulations explored the potential benefits of oblique stitching, which improves mode II fracture toughness but may reduce mode I benefits.
Further analyses included compression-after-impact (CAI) simulations, demonstrating that stitching significantly enhances structural integrity after damage. These results suggest that stitching-reinforced laminates could be viable for high-performance applications such as wooden aircraft structures and crash-resistant components in vehicles. Beyond the mechanical benefits, this stitching method supports the use of renewable, bio-based materials in industries traditionally dominated by synthetic composites. Our findings indicate that this stitching process can substantially enhance the fracture toughness of wood-based laminates, paving the way for their use in applications previously deemed impractical. This work underscores the potential of numerical modeling as a powerful tool for optimizing advanced material design and validating innovative reinforcement techniques.
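A common way to quantify the mode I fracture toughness that the DCB simulations and tests target is the beam-theory data reduction used e.g. in ASTM D5528, sketched below. The loads, openings and crack lengths are illustrative numbers, not the measured data for the stitched wood laminates.

```python
import numpy as np

def g_one(load, opening, crack_length, width, delta=0.0):
    """Mode I energy release rate, modified beam theory: G_I = 3*P*d / (2*b*(a + |delta|))."""
    return 3.0 * load * opening / (2.0 * width * (crack_length + abs(delta)))

P = np.array([120.0, 110.0, 100.0])       # N, loads at successive crack-propagation points
d = np.array([4.0e-3, 5.5e-3, 7.0e-3])    # m, corresponding opening displacements
a = np.array([0.050, 0.060, 0.070])       # m, crack lengths
b = 0.020                                 # m, specimen width

print("G_I [J/m^2]:", np.round(g_one(P, d, a, b), 1))
```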
Presented By Michael Probst (CAIQ GmbH)
Authored By Michael Probst (CAIQ GmbH), Bernd Lindner (CAIQ GmbH, Germany), Klemens Rother (Munich University of Applied Sciences), Markus Gruenert (Beventum GmbH)
AbstractThe issues in the field of energy generation, sustainability and resource consumption are omnipresent. For wind turbines in particular, sustainable and robust lightweight construction is both a must and a challenge. New lightweight construction approaches can be realised using a new manufacturing process called NECTO© structures. Conventional materials such as CFRP or GFRP involve high disposal costs and environmental problems during disposal. NECTO© structures are punched sheets embossed in the spatial axis, which are completed with a cover layer to form ultra-light and very rigid panel systems, monolithic or joined from different materials. The production method works with steel as well as aluminium and therefore offers excellent opportunities to make lightweight construction particularly sustainable and to overcome many challenges for sandwich shells, e.g. the multi-axis curvature of the shell or variable wall thicknesses. Despite the good stiffness and strength values, the weight of the core easily fulfils the requirements of ultra-lightweight design. For the industrial utilisation of this technology, however, it is essential that a verified simulation capability for predicting behaviour and stability is available. This applies in particular to large structures such as wind turbines, where real tests are virtually unaffordable and entail corresponding risks. The requirements and the innovative lightweight construction approach will be presented in the lecture. The type of structural design will be considered in depth, with a particular focus on load-bearing capacity. Load-bearing structures such as towers or rotor blades require a very robust virtual design, for which no established methods yet exist for this kind of structure. The presentation will introduce the method; the consideration of micro-, meso- and meta-modelling and the comparison with real tests form the basis for the use of the new lightweight construction technology. Finally, the current status of simulation technology and the outlook for further possible areas of application are presented.
BiographyCAIQ GmbH, Head of Business Development 1991: University degree "Diplomingenieur" at the Technical University Munich, Germany 1991 - Sept. 1996 Employed as simulation engineer Oct. 1996 - 2020 Self-employed as managing director / board member of ISKO engineers AG since August 2020 employed by CAIQ GmbH as Head of Business Development
Presented By Athanasios Fassas (BETA CAE Systems)
Authored By Athanasios Fassas (BETA CAE Systems), Georgios Mokios (BETA CAE Systems SA, Greece)
AbstractOver the past few years, VMAP has supported the storage and exchange of various results, enabling seamless multi-disciplinary workflows. In such workflows, where multiple software solutions are utilized, VMAP ensures that results can be shared and processed without regression.
Having been part of this project from the very beginning, we have integrated support for the VMAP format in our products, making it readily available to our users. In this presentation, we will explore a new approach for handling these results, allowing users to leverage the power of the META post-processor to evaluate extracted results and save them in a format compatible with the ANSA pre-processor for further simulations.
The final step toward this goal is to implement full support for the VMAP format in our META post-processor. With META, users can unlock a broad range of capabilities and introduce new functionalities for working with VMAP files. Equipped with various 2D plots, isofunctions, and an extensive array of toolbars, users can efficiently visualize and analyze the results.
Going a step further, users can create new results by combining existing ones. The built-in reporting functionality in META facilitates the exchange of critical information among users, while the automation capabilities streamline the entire post-processing workflow.
Once the post-processing is complete, users can save the necessary results and import them into the ANSA pre-processor, ready to set up the next analysis.
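Since VMAP stores its containers in HDF5, the structure of a VMAP file can be inspected with any generic HDF5 reader, which is often a handy sanity check when exchanging results between tools. The file name below is a placeholder and the exact group layout depends on the VMAP specification and the writing software, so this is only an illustrative sketch, not META or ANSA functionality.

```python
import h5py

with h5py.File("result.vmap", "r") as f:          # hypothetical exported file
    def describe(name, obj):
        # list every dataset with its shape and dtype to see what the writer stored
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    f.visititems(describe)
```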
Presented By Marc Hazenbiler (Test-Fuchs)
Authored By Marc Hazenbiler (Test-Fuchs), Antoine Delacourt (Siemens Industry Software SAS), Matthieu Ponchant (Siemens Industry Software SAS)
AbstractOn the road to aircraft decarbonization, new propulsion concepts are designed and virtually assessed. One possibility is using carbon-free fuels like hydrogen, which is currently experiencing a lot of popularity, in part due to the European research program Clean Aviation. Within the European-funded project NEWBORN, a consortium led by Honeywell International s.r.o. is working on a fuel cell (FC) system demonstrator with several partners. In parallel to the real demonstrator, a Digital Twin (DT) is necessary to support design activities from the preliminary stage up until final control validation.
A key challenge is the management of hydrogen in a complex and harsh environment. The fuel must be maintained in the liquid phase in a cryogenic tank and provided to the fuel cell anode in a gaseous phase at the correct pressure and temperature. Leakages need to be avoided within the whole system to prevent the buildup of explosive atmospheres. Special controls and actuators are necessary to ensure the safety and reliability of the system. Heat fluxes into the tank need to be minimized to reduce vaporization, and therefore hydrogen losses. This is accomplished by encompassing all necessary peripheral equipment (e.g. valves, sensors, heaters for preconditioning) in two equipment bays, the so-called “cold box” and “hot box”, which are attached to the tank and isolated from the environment.
Every component that is part of the system is modeled within Simcenter Amesim at different levels of fidelity to analyse its performance. All these components make up the H2 line and are depicted within the digital twin, including a hydrogen recirculation loop for the fuel cell with its associated model, to allow faster control calibration and some failure mode analysis. Thanks to this simulation work, component design is enhanced and easier component integration within the complete system is enabled, especially in the context of the short planning execution necessary in the NEWBORN project. The development of such a subsystem model supports the understanding of complex hydrogen physics as well as the interaction with other subsystems at an aircraft-system level to create a fully completed and validated fuel cell system digital twin. Another key element is the possibility of assessing various configurations for the hydrogen line and selecting the one that fulfills all requirements while delivering the best performance.
The simulation model will then be validated with test measurements, once preliminary components, and later the integrated system, are completed. The objective is to ensure confidence in the fuel cell digital twin. The digital twin will then be run in parallel with the final demonstrator to showcase and highlight the complementarity of both approaches, before going further with a real flying demonstrator, which could be the next step in this research activity. In conclusion, using a digital twin during the development of new technologies, such as the hydrogen system presented in this work, can help to gain a better understanding of the processes and interactions within the system and therefore find better designs and solutions for the tasks at hand, while simultaneously accelerating the development process.
BiographyMarc Hazenbiler is a young talent with a strong interest in pushing the aerospace sector towards a greener future. Currently, he is part of the Hydrogen and Space Engineering Team at TEST-FUCHS Aerospace Systems, developing valves for cryogenic hydrogen applications, paving the way for more sustainable aircraft. Marc holds a Master's degree in Aerospace Engineering from the University of Stuttgart, where he also took the opportunity to gain hands-on experience by participating in the university's rocketry team. In this team he also discovered his passion for fluid systems, valves, and simulation, eventually taking on the role of Head of Fluid Systems. During his internships and while writing his Master's thesis, Marc expanded his experience in the field of simulation, further fueling his interest in this area. At the congress, he will provide insights into the component and system simulations of the cryogenic hydrogen supply system performed within the EU-funded NEWBORN project.
09:40
Authored & Presented By Daniel Ulrich (Universität Stuttgart IKTD)
AbstractTopology optimization (TO) is a critical technique in engineering design for creating lightweight structures with optimal stiffness and strength. However, the high computational cost of traditional TO methods limits their use in early design stages where rapid iteration and flexibility are essential. Recent advances in machine learning (ML), particularly deep learning, offer potential solutions to these challenges. In particular, the integration of generative models with TO can efficiently produce high-fidelity designs. In this study, we adapt a generative ML framework from 3D computer vision to structural design applications. Our approach uses an implicit shape representation conditioned by a code vector, allowing a single network to represent a wide range of shapes and interpolate smoothly between them. The primary goal is to develop a generative auto-decoder network that uses model boundary conditions as input parameters to generate new, accurate design proposals. To construct the training dataset, we established a parameterized finite element analysis (FEA) model, including a unit force with variable position and orientation as the load, along with adjustable support points. A parameterized volume constraint accounts for different load levels. Random samples were generated using the Latin Hypercube method to ensure uniform coverage of the parameter space. The TO was performed with a combined objective function aiming for both high stiffness and low stress concentration. Model evaluation involved comparing generated shapes with unseen test cases, focusing on accuracy and stress performance through simulation. The results show that the trained generative model effectively produces structurally optimized designs with high accuracy and low memory requirements. This capability makes it suitable for the rapid generation and validation of early-stage design concepts for lightweight components. In addition, the model's ability to adapt to changing boundary conditions in near real-time indicates its potential for applications requiring rapid design iteration, such as instantaneous TO and conceptual design workflows. By integrating generative models with TO, this work provides a practical approach to reducing computational costs and increasing design flexibility in the early stages of engineering design. This is in line with current advances in ML-based TO methods and contributes to improving the efficiency of engineering design practices.
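As a minimal sketch of the sampling step described in the abstract, the snippet below draws a Latin Hypercube design over four parameters (load position x/y, load angle, volume fraction). The parameter choice and bounds are illustrative assumptions, not the values used in the study.

```python
# Latin Hypercube sampling of an assumed TO parameter space (illustrative only).
import numpy as np
from scipy.stats import qmc

lower = [0.0, 0.0, 0.0, 0.2]          # assumed lower bounds
upper = [1.0, 1.0, 2.0 * np.pi, 0.6]  # assumed upper bounds

sampler = qmc.LatinHypercube(d=4, seed=42)
unit_samples = sampler.random(n=1000)            # uniform coverage of [0, 1]^4
samples = qmc.scale(unit_samples, lower, upper)  # map to physical ranges

# Each row of `samples` would parameterize one FEA/topology-optimization run
# used as a training case for the generative auto-decoder.
```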
Presented By Emre Baris Yildiz (Universität Stuttgart IKTD)
Authored By Emre Baris Yildiz (Universität Stuttgart IKTD), Matthias Kreimeyer (Institute for Engineering Design and Industrial Design, University of Stuttgart)
AbstractAs the transportation industry increasingly adopts electrification, efforts to enhance the power density of electric motors have gained importance. This focus stems from the need for more compact, efficient systems. Advances in electromagnetics, including innovative materials and refined designs, have significantly improved power density. Technologies such as high-performance magnets, reluctance-effect materials, advanced winding methods, and ultra-thin laminates enable engineers to achieve greater output in smaller motors. While these advancements enhance electromagnetic performance, they simultaneously increase the need for robust mechanical power transmission. A critical factor is the shaft-hub connection within the rotor assembly, which governs torque transmission, particularly at high rotational speeds. Rotors are typically assembled using interference fits, which ensure precise alignment, balance, and gap-free assembly, resulting in smooth, vibration-free operation with minimal energy losses. However, at high rotational speeds, this method suffers from reduced torque transmission due to centrifugal forces and thermal strain. To address these limitations, friction fitting with higher interference levels has been proposed, increasing joint pressure between the rotor core and the shaft. Extensive research has been conducted to evaluate the feasibility of shaft-hub connections with substantial interference. However, most studies focus on solid shafts and hubs, failing to account for the geometric complexities of modern electric motors with laminated rotor cores. The iron cores of contemporary rotor assemblies are not solid but consist of individual, electrically insulated laminations designed to minimize eddy currents—circular currents induced in conductive materials by changing magnetic fields. These laminations, typically coated with an insulating layer, interrupt the flow of eddy currents, significantly reducing energy losses and heat generation, thereby improving motor efficiency. Lamination stacks are composed of extremely thin electrical steel strips, typically ranging from 0.1 to 1 mm in thickness. One common manufacturing technique for producing laminated cores is interlocking, a cost-effective and efficient process. This method involves punching thin sheets of electrical steel into precise shapes and bundling them into stacks, where embossed areas on the laminations are pressed together under axial force to form an interference fit between the laminations. Interlocking offers high production speeds, reduced material waste, and consistent mechanical and electromagnetic performance. However, experimental and numerical investigations revealed that interlocked laminated cores are not ideal for high-interference applications. Under such conditions, these cores exhibit nonlinear deformation, or buckling, due to elevated joint pressures. This elastic buckling leads to delamination of the rotor core, negatively affecting its electromagnetic properties and increasing rotor shaft imbalance. Furthermore, the buckling deformation significantly alters the joint pressure distribution at higher interference levels, reducing the mechanical performance and torque transmission capacity of the connection. To address these challenges, a robust and precise numerical framework was developed based on experimental results and material tests.
Numerical simulations were conducted to analyse the deformation behaviour of laminated cores under high interference conditions and to identify the key parameters influencing buckling. The results revealed critical factors governing the buckling behaviour of laminated cores. Using these insights, the geometry of the electrical steel laminations was numerically optimized to enhance their buckling resistance under high joint pressures. These optimized designs enable the rotor cores to withstand higher interference pressures, facilitating the transmission of greater torque. These optimizations represent a significant step toward improving the mechanical stability of laminated rotor cores in high-interference-fit applications, supporting higher torque transmission and overall performance. Future research could focus on experimentally validating the improved stability and torque transmissibility of the numerically optimized rotor core geometry under dynamic operating conditions, in order to assess the practical feasibility and effectiveness of the proposed optimizations.
Presented By Erika Quaranta (Former Student Contacts)
Authored By Erika Quaranta (Former Student Contacts), Malcolm Smith (ISVR Consulting)
AbstractThis paper describes the numerical analysis of unsteady flow phenomena in the metering system of a gas production platform. The unsteady flow had previously been identified as the root cause of a vibration issue, which can lead to fatigue failures of attached small-bore pipes, and had constrained the operating conditions of the platform for many years. The highest levels of vibration were linked to a resonance peak in the unsteady pressure between 33 and 35 Hz, with the resonance frequency varying with flow velocity and other flow conditions. Various mitigation measures and operational recommendations were applied to avoid structural damage, but to remove the flow instabilities in the inlet and outlet header of the metering system and fully resolve the problem, a substantial and expensive redesign of the pipeline was required. In order to support such an extensive task, ISVR Consulting has used numerical CFD modelling to simulate the flow through the different parts of the system to identify the main excitation mechanism based on unsteady flow instability, linked to acoustic feedback and the acoustic modes of the metering trains, and correlated to the various flow regimes and velocities. The key feedback mechanism in the inlet header was identified as a so-called Rossiter Tone, in which vortex shedding from a bifurcation into two metering lines caused aeroacoustic feedback from features further downstream. Surprisingly, when the flow from the two lines merged again at the outlet header this was also found to be a source of instability generating another Rossiter tone at a similar frequency. For this study, we have chosen to use OpenFOAM, a well-developed open-source code with the potential to solve both unsteady CFD and acoustics, because, despite all the limitations of applying direct methods in aeroacoustics, we were mainly interested in understanding how the flow vorticity and pulsating pressure were linked in a specific low-frequency tonal problem. Results have provided the basis for an in-depth understanding and localization of the aeroacoustic mechanism in the different areas of the pipeline and have supported recommendations for an efficient redesign of the entire system to control the problem at source.
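As background only (not taken from the paper), the classical Rossiter relation gives a semi-empirical estimate of such feedback tone frequencies; the constants quoted are the usual cavity-flow defaults rather than values fitted to this pipework:

$$ f_m = \frac{U_\infty}{L}\,\frac{m - \alpha}{M_\infty + 1/\kappa}, \qquad m = 1, 2, 3, \dots $$

where $L$ is the feedback length (here, roughly the distance between the vortex-shedding bifurcation and the downstream reflecting feature), $M_\infty$ is the Mach number, $\kappa \approx 0.57$ is the ratio of vortex convection speed to the mean flow speed, and $\alpha \approx 0.25$ is an empirical phase-lag constant.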
Presented By Nina Moello (pSeven)
Authored By Nina Moello (pSeven), George Biryukov (PSEVEN SAS)
AbstractThe paper presents the methodology for investigating wing geometry within the late stages of aircraft design, addressing two common problems: optimization of the geometry to improve its aerodynamic characteristics and iterative search of the deformed shape to refine the values of external loads. Both problems involve numerous identical computational fluid dynamics (CFD) or finite element analysis (FEA) simulations, which traditionally require manual preparation at each iterative step. To remove the human from the loop, the solution of the problems is implemented in the form of automated workflows where engineers only need to submit initial files in a predefined format and review the final results. Such an approach makes it easy to repeat the study if the problem statement changes, minimizes human error by automating the data flow, and allows less experienced users of the simulation tools to obtain the same results. The first challenge focuses on minimizing drag force while maintaining lift force by adjusting the twist distribution along the wing. Such a study, in the form of an optimization workflow, may be performed in various problem statements, including multiple flight regimes combined. The second one addresses the iterative interaction between aerodynamic and structural analyses. In this case, the wing is considered deformable, meaning that the force factors obtained from aerodynamic simulation can alter wing geometry. Structural analysis is used to estimate these deformations. But since the geometry has changed, aerodynamic analysis must be rerun to correctly determine the forces acting on the wing. As this process is nonlinear, it is repeated iteratively until the changes between steps become negligible. Since the second problem involves multidisciplinary analysis, including both structural and fluid dynamics, it typically requires collaboration among multiple specialists. This paper also explores how to address the management of simulation processes and data transfer challenges between different departments involved in such analyses. The approaches described in this paper can be adapted to various geometry parameterization techniques, load scenarios, and simulation software.
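A minimal sketch of the fixed-point aero-structural loop described above is given below. The wrapper functions, convergence metric and tolerance are hypothetical placeholders for the automated workflow steps, not the authors' implementation.

```python
# Fixed-point iteration between aerodynamic loads and structural deformation.
import numpy as np

def run_cfd(shape):
    """Placeholder for the automated aerodynamic workflow: returns loads."""
    raise NotImplementedError

def run_fea(loads):
    """Placeholder for the structural workflow: returns the deformed shape."""
    raise NotImplementedError

def coupled_solution(initial_shape, tol=1e-4, max_iter=20):
    shape = np.asarray(initial_shape, dtype=float)
    loads = None
    for _ in range(max_iter):
        loads = run_cfd(shape)              # forces on the current geometry
        new_shape = run_fea(loads)          # deformation under those forces
        change = np.linalg.norm(new_shape - shape) / np.linalg.norm(shape)
        shape = new_shape
        if change < tol:                    # stop once the update is negligible
            break
    return loads, shape
```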
Presented By Daniel Trost (Key to Metals)
Authored By Daniel Trost (Key to Metals), Daniel Trost (Total Materia AG)
AbstractThe importance of accurate material property information for engineering calculations and simulations, such as CAE (Computer Aided Engineering) and FEA (Finite Element Analysis), can never be overstated. Conventional mechanical properties such as yield strength, tensile strength, hardness, and ductility may vary more than tenfold for structural steels at room temperature, depending on the variations of alloying elements, heat treatment and fabrication. With even a moderate change in working temperature, these property variations can become even more pronounced, and their approximation using the typical property values for some groups of alloys may lead to very serious errors. While large material databases and material selection software can help with these challenges in engineering and simulation, it is unfortunately technically impossible to have all properties for all materials readily available from experiment and standards. Recent developments in artificial intelligence and machine learning, however, provide an opportunity to overcome this gap. This paper presents a machine learning system aimed at predicting material properties of a wide range of diversified materials, such as stainless steels, aluminums, coppers, refractory alloys and polymers. By using copious training sets drawn from a very large database and a proprietary methodology for taxonomy, data curation and normalization, the developed system is able to predict physical and mechanical properties for hundreds of thousands of materials, at various temperatures and for various heat treatments and delivery conditions. The accuracy achieved, in terms of relative error, is in most cases above 90% and frequently above 95%, which is clearly higher than the MMPDS B-Basis values used in the aerospace industry.
Authored & Presented By Vangelis Palaiokastritis (BETA CAE Systems)
AbstractMaterial innovation is critical across industries, particularly for the design of lightweight and sustainable products. Accurately capturing the complex behavior of composites remains a significant challenge in engineering design and analysis. This challenge highlights the importance of multiscale modeling techniques, which provide a comprehensive understanding of material response across various length scales. Efficient and user-friendly CAE tools are essential for analysts to develop high-fidelity simulation models. This work outlines a framework that enables multiscale modeling of composites within a multidisciplinary environment. The process begins at the microscale, where an RVE (Representative Volume Element) Generator is used for virtual material characterization. Various types of microstructures, including Unidirectional, Woven, Honeycomb, and Short Fiber Reinforced, can be generated and analyzed using any structural FEA solver. This analysis yields homogenized composite properties, which are then used to create solver material cards for subsequent macroscale analysis. In the second step, the microscale data are utilized to create a laminated model of a solid composite component. After the meshing process, the efficient build-up and inspection of the composite lamination are carried out. Once the laminated model is complete, it can be adapted for various analyses, with boundary conditions and all required analysis parameters set for multiple load cases. Finally, the composite model is prepared, and the FEA simulations are submitted and monitored. Post-processing results from laminated component analyses can be complex, as tensor results and material history variables exist through the thickness. To address this, a new toolset for post-processing composite results has been introduced. This toolset facilitates a range of actions, from reading large sets of results and handling materials to identifying critical areas, generating through-the-thickness plots, and recording post-processing actions. In summary, this integrated approach streamlines the entire workflow, from material characterization to final post-processing. It empowers CAE engineers to efficiently design and analyze advanced composite structures, ultimately meeting the demands for innovation and sustainability in engineering.
Presented By Priyanka Gulati (Fraunhofer SCAI)
Authored By Priyanka Gulati (Fraunhofer SCAI), Klaus Wolf (Fraunhofer SCAI), Andre Oeckerath (Fraunhofer SCAI)
AbstractIntroduction / Working Groups. Presentations: Enhancing Simulation Workflows – Leveraging VMAP, META, and ANSA for Optimized Results Handling, presented by BETA CAE Systems; Advanced Laser-Based Manufacturing – Multiphysics Modelling and Interoperability with the VMAP Standard, presented by Aerobase; Demands from Standardisation for the Automotive Industry, presented by Volkswagen; Discussion. Overview: Engineers face a growing number of software tools and experimental testing data throughout their product design, optimization, and validation workflows. The VMAP standard, led by the VMAP Standards Community, aims to standardize data exchange in computer-aided development processes. VMAP is a vendor-neutral standard for CAx data storage that enhances interoperability in virtual and physical engineering workflows. It enables data transfer between simulation codes as well as storage of data from physical experiments or machine monitoring, filling a gap in the industry. VMAP also provides input/output (I/O) routines for easy implementation. The VMAP specification is based on the widely accepted HDF5 API. The VMAP I/O library includes bindings for most programming and scripting languages. VMAP I/OLib v1.2.0 implements the VMAP standard for simulation and measurement data. It covers the storage of geometrical data, input and output variables, unit systems, metadata, and material data. The measurement data group stores information about measured objects and the measuring setup. This provides a high-level overview of the VMAP structure, which aims to store almost all the required information for simulation and measurement data, addressing the industry's need for standardized data exchange. The VMAP SC working groups for sensor data, full model storage and wrapper development are consistently working on the further extension of the standard in multiple domains. The VMAP Sensor & Experimental Data working group aims to standardize the storage of measured and experimental data within the manufacturing industry. A clear application comes from the blow moulding domain, where stereography and thermography data need to be incorporated into the validation process along with the simulation data. This use case needs a standard format to store both test and simulation data so that the validation process can be carried out without any loss of information, and the VMAP Standard is being extended to support such use cases. The focus is also on storing monitoring data from various manufacturing processes, such as the Wire Arc Additive Manufacturing (WAAM) process. In WAAM, robots, sensors and cameras are operated to collect data throughout the process cycle. This data needs to be synchronized and stored in a standard format for visualization and machine learning models. VMAP is being extended to cover this domain as well. The VMAP Complete Model Storage working group aims to store boundary conditions in the VMAP file so that, in the future, a simulation can be directly initiated using the VMAP file. One of the applications from this working group requires storage of numerical data, like boundary conditions, in the VMAP Standard. Jet engine design requires the coupling of various tools, standardized data exchange and support for multi-fidelity data. Integration of all data components into the VMAP Standard will provide such complex use cases with a comprehensive data exchange format. The group is actively working to find the best-fitting method for storing the boundary conditions. The VMAP Wrapper working group was started recently and is focussing on updating the existing VMAP wrappers for OpenFOAM and Abaqus based on the current demands from our use cases. Furthermore, there is great interest from project partners in building VMAP wrappers for MSC Marc and Flowphys. The aim is to have standardized data exchange and interoperability with various multiphysics software. This will allow for efficient data management and hence comprehensive data structures which can be easily labelled and then used for further analysis. The bidirectional flow of data will help engineers to handle multiple software tools and the workflow chain without putting in extra human hours for translation and transfer. The VMAP SC will host a workshop to demonstrate the broad applicability of the VMAP standard by means of specific examples from ongoing R&D projects and to include further requirements and suggestions for the standard in the discussions.
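Since the VMAP specification is built on HDF5, a file can in principle be inspected with any HDF5 tooling. The snippet below is an illustrative sketch only: the file name is hypothetical, and the authoritative group layout (geometry, variables, unit systems, metadata, material and measurement data) is defined by the VMAP specification and its I/O library, not by this code.

```python
# Walk the HDF5 tree of a VMAP file and list its groups and datasets.
import h5py

def print_item(name, obj):
    kind = "group" if isinstance(obj, h5py.Group) else "dataset"
    print(f"{kind}: /{name}")

with h5py.File("example_result.vmap", "r") as vmap_file:  # hypothetical file
    vmap_file.visititems(print_item)  # visit every group/dataset in the file
```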
Presented By Barbara Neuhierl (Siemens Digital Industries Software)
Authored By Barbara Neuhierl (Siemens Digital Industries Software), Carmine Marra (Università degli Studi di Modena e Reggio Emilia, Italy), Alessio Barbato (Siemens Industry Software Srl), Alexandros Panagoulias (Siemens Industry Software GmbH), David Mann (Siemens Industry Software GmbH), Alessandro d'Adamo (Università degli Studi di Modena e Reggio Emilia, Italy)
AbstractThe higher-than-ever need to decarbonize energy-intensive sectors and to increase their efficiency is motivating a shift from carbon-rich fossil fuels used in low-efficiency combustion devices (e.g. engines, turbines) to non-carbon fuels (e.g., hydrogen) electrochemically converted in high-efficiency fuel cell systems. Fuel cell technology extends the decarbonization possibilities of battery-based systems to generate power where the energy requirement is presently higher than the limited energy density of batteries, e.g. heavy-duty transportation or medium-range aviation. Multi-dimensional simulation techniques such as Computational Fluid Dynamics (CFD) are one of the main ways costs can be lowered by decreasing reliance on physical prototypes and the resources those require, shifting the development pathway to digital prototypes. In this context, the industrial development of fuel cell systems has focused on the Polymeric Electrolyte Membrane Fuel Cell (PEMFC) type, due to its higher power density, low-temperature operation, fast response time, inherent modularity, and absence of corrosive liquid electrolyte with respect to other fuel cell types. The key to optimal PEMFC operation is to obtain a high and uniform thermal-hydration state of the electrolytic membrane during its operation, necessary for the maximization of the rate-limiting ionic conductivity. Hence the water transport through the porous media (Gas Diffusion Layers and Catalyst Layers, GDL and CL, respectively) is not only crucial to optimize the cell’s operation, but also highly complex to model in view of the multi-phase nature of the fluid and of the ubiquitous presence of solid walls and small pores. In this study, two Eulerian multi-phase models are implemented in Simcenter STAR-CCM+ and compared in the unique framework of PEMFCs, namely the mixture multi-phase (M2) and the two-fluid (TF) model. A reference case from literature is simulated to compare both multi-phase models, critically discussing the impact of the hypotheses of both models on the cell operation, as well as those of a Eulerian modelling approach in this context. The results provide guidelines for the modelling of multi-phase water transport in PEMFC porous media, contributing to the advancement of the simulation techniques for the power generation and propulsion industries.
10:00
Authored & Presented By Namwoo Kang (Narnia Labs)
Abstract“How can we develop better products faster?” is a critical question confronting manufacturing industries worldwide. Traditional development processes, reliant on repetitive and human-driven design and analysis, often increase costs and prolong time-to-market. To address these challenges, this research presents Deep Generative Design, a paradigm shift from conventional simulation-based design to an AI-driven methodology. The proposed framework is built upon three core AI technologies. First, Generative AI automatically produces large sets of novel design concepts through 3D deep learning. By leveraging historical data while ensuring engineering feasibility, it provides designers and engineers with a broad range of innovative ideas, thereby inspiring fresh solutions. Second, Predictive AI employs 3D deep learning-based CAE/CAM techniques to evaluate each design's engineering performance, manufacturability, production cost, and novelty in real time. This significantly reduces the cost and duration of analysis, helping to minimize the sequential and iterative cycles commonly required for design, simulation, and test. Third, Optimization AI applies deep learning to identify optimal design solutions under specified performance targets and constraints. By simply inputting desired performance metrics, engineers can generate the best-fitting solutions in real time, as well as compare trade-offs across multiple performance indicators. By securing product-specific big data and AI solutions, companies can accelerate digital transformation and efficiently establish data standards for future use. Replacing repetitive tasks in CAE and optimization with AI dramatically shortens development cycles, lowers production costs, and enables engineering teams to focus on more creative, value-adding activities. In addition, AI autonomously generates, evaluates, and optimizes designs, offering significant market opportunities for innovation. Finally, the framework reduces dependence on domain experts by allowing multiple users to access and share the latest AI models via cloud-based platforms. Consequently, anyone can perform end-to-end engineering tasks, including design, analysis, and optimization, without specialized assistance, ultimately paving the way for faster, more cost-effective, and higher-quality product development. This integrated Deep Generative Design framework has been validated across a broad range of industries, including mobility, electronics, heavy industries, and robotics, demonstrating its robust applicability. In this presentation, we focus on several mobility-related examples, specifically in automotive wheel and brake design generation and vehicle crash performance prediction. First, we introduce an AI system that, given a user-specified reference image and style via text input, can generate a multitude of 3D wheel designs for vehicles. It then predicts each design's stiffness and strength, recommending the most promising solutions to designers. Second, for the brake caliper, we describe an AI approach that learns from a limited set of design samples to produce large-scale synthetic data, which in turn is used to accurately predict stiffness. Finally, we address vehicle crash scenarios: by training on video test results, the AI can predict outcomes of small-overlap and side-pole impact tests using only the vehicle's geometric information, as well as propose optimized design solutions.
These examples illustrate how the Deep Generative Design framework can be seamlessly applied to real‐world engineering challenges, underscoring its potential to transform the entire product development process.
BiographyNamwoo Kang is an associate professor at the Graduate School of Mobility at KAIST. He is also currently CEO of Narnia Labs. Prior to joining KAIST, he served as a Research Fellow in the Department of Mechanical Engineering at the University of Michigan and worked as a Research Engineer at Hyundai Motor Company. He earned his Ph.D. in Design Science at the University of Michigan. Previously, he obtained an M.S. degree in Technology and Management and a B.S. in Mechanical and Aerospace Engineering from Seoul National University. He has been pursuing Generative AI-driven Product Design research by integrating physics and data for virtual product development. His research interests include generative design, data-driven design, machine learning, deep learning, design optimization, topology optimization, CAD/CAM/CAE, and HCI.
Authored By Michael Klein (INTES GmbH)
AbstractIf an industrial design buckles during operation, which is in most cases an undesirable behavior, sound advice is certainly needed on how to avoid the buckling phenomenon, if possible. Naturally, the CAE engineers should provide this advice, which indeed can be provided in linear buckling cases by appropriate optimization techniques. The situation becomes much worse in case of nonlinear buckling, where contact forces and geometrically nonlinear displacements and even plasticity have to be taken into account. No optimization method is currently available that works for the objective function ‘no buckling’. It is the purpose of this paper to describe how one can get a modified design which does not buckle under the given loading. This can be understood as a first step to optimization. Whether buckling happens or not during a nonlinear static analysis cannot be known in advance. If buckling happens, one could even get a final result after buckling, but this does not give any hint on how to modify the design to avoid buckling. Classical optimization methods start from a result and ask how the design has to be changed to modify one or several results of the analysis. Even if one finds a way to overcome the nonlinear buckling by optimization, one has to be aware that this modified design could show another buckling case at another load level. This could cause another optimization of this buckling case. The number of necessary optimizations to get a non-buckling design under the given loading cannot be predicted. One important reason for this situation is contact, because any design change may directly change the contact forces, which trigger the possible occurrence of buckling. So, if nonlinear static analysis under contact leads to buckling, we have to find a useful strategy to get a non-buckling design under the given loading. Here, the basic idea is to use the occurring buckling mode shapes to derive a design modification. The buckling modes are not only easy to compute, but they will also give the chance to repeat the modification of the design with any number of subsequent buckling cases. This would open a way to find a modified non-buckling design by one single process and one single job. This paper will demonstrate the process using an industrial use case. In addition, this process will be performed for both elastic and plastic material properties to show the difference between both cases. All computations and result evaluations are performed using the industrial CAE code PERMAS.
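A minimal sketch of the basic idea described above is given below: offset the nodal geometry along a computed buckling mode shape. The array layout, scaling rule and target amplitude are illustrative assumptions, not the PERMAS implementation.

```python
# Derive a geometry modification from a buckling mode shape (illustrative only).
import numpy as np

def perturb_geometry(nodes, buckling_mode, target_amplitude):
    """Offset nodal coordinates along a buckling mode shape.

    nodes:            (n_nodes, 3) nodal coordinates
    buckling_mode:    (n_nodes, 3) modal displacement vectors
    target_amplitude: desired peak geometric modification, e.g. a fraction
                      of the local wall thickness (assumed criterion)
    """
    scale = target_amplitude / np.max(np.abs(buckling_mode))
    return nodes + scale * buckling_mode

# The modified geometry would then be re-analysed; if a further buckling case
# appears at another load level, the same step can be repeated with the next
# buckling mode, as described in the abstract.
```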
Presented By Alejandro Martinez Navarro (Dassault Systemes)
Authored By Alejandro Martinez Navarro (Dassault Systemes), Richard Shock (Dassault Systemes), Guido Parenti (Dassault Systemes)
AbstractAs the automotive industry increasingly shifts toward electrification, reducing vehicle drag becomes crucial for enhancing battery range and meeting consumer expectations. Additionally, recent European regulations require car manufacturers to provide reliable drag data for vehicles as they are configured (e.g. WLTP, GHG Phase 2). Among the factors influencing vehicle drag, the interaction between tire wakes and the overall vehicle aerodynamics is critical. To improve the performance of designs, it is essential to identify which tire features have a stronger influence on the flow field. While physical testing enables manufacturers to evaluate the aerodynamic effects of tires, isolating the specific impacts of contact patch and bulge deformations is challenging because these features evolve together under load. Computational Fluid Dynamics (CFD) simulations offer a unique capability to decouple and independently analyse these deformation effects. This study investigates the aerodynamic impact of realistic tire deformation parameters, focusing on bulge and contact patch deformations, using CFD simulations based on the Lattice-Boltzmann method. The simulation setup, using a standalone tire model, was validated against experimental results in prior research. Given the complexity of the flow structures in the tire wake, a vortex identification algorithm based on the Gamma-2 criterion, combined with a single-link clustering method, was employed to examine vortex behaviour and downstream wake development. The unsteady nature of the wake required a transient analysis to better understand the influence of deformation parameters on vortex shedding dynamics. The results identify the deformation parameters that most significantly influence the flow field and classify them into two primary groups based on their effects on wake evolution: those that induce wake contraction and those that promote wake expansion. The analysis of the vortex behaviour shows consistent trends that characterize both the contraction and the expansion of the wake, while transient analysis highlighted how these features influence unsteady vortex shedding and its implications for wake width development.
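As background on the vortex identification step, the Gamma-2 function of Graftieaux et al. is commonly evaluated at each point $P$ over a neighbourhood $S(P)$ of $N$ points as

$$ \Gamma_2(P) = \frac{1}{N} \sum_{M \in S(P)} \frac{\left[\mathbf{PM} \times \left(\mathbf{U}_M - \widetilde{\mathbf{U}}_P\right)\right] \cdot \mathbf{e}_z}{\lVert \mathbf{PM} \rVert \, \lVert \mathbf{U}_M - \widetilde{\mathbf{U}}_P \rVert}, $$

where $\widetilde{\mathbf{U}}_P$ is the velocity averaged over $S(P)$, and points with $|\Gamma_2| \gtrsim 2/\pi$ are typically flagged as belonging to a vortex core. This is the standard form from the literature; the neighbourhood size, threshold and clustering linkage actually used in the study are not specified here.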
Presented By Tom Elson (Element Digital Engineering)
Authored By Tom Elson (Element Digital Engineering), Steve Summerhayes (Element Digital Engineering)
AbstractThe rapid decarbonization of industry and transport is a central challenge to the transition to a more competitive and greener economy. Hydrogen is seen by many as an energy vector with potential to decarbonize industries such as aerospace and heavy goods transport which cannot be easily electrified. In these sectors, which need a higher energy density than is available from existing battery technology, hydrogen is likely to play a significant role in the decarbonization strategy. Whereas gaseous hydrogen in high pressure storage tanks is a feasible solution for ground-based and water-based transport, the associated weight penalty of high pressure tanks makes it less suited for the aerospace industry, where liquid hydrogen is the preferred alternative. In order to utilize the hydrogen as fuel, either by producing electricity in fuel cells or by burning it in gas turbines, it must first be evaporated and then brought up in temperature. This requirement derives from a number of considerations, including the safety, integrity, and efficiency of the propulsion system. One option is to utilize excess heat produced by the powerplant and, through a thermal management system, redirect that heat to evaporate the LH2. This approach has, in the past, been used in traditional hydrocarbon-fueled aerospace propulsion systems. The Clean Aviation NEWBORN program has been awarded to develop a megawatt propulsion system with hydrogen as its energy source and develop it to TRL 4. As part of this program, the consortium is developing the thermal management system which utilizes excess powerplant heat to thermally condition the hydrogen prior to entering the fuel cell. This paper outlines the development cycle of a liquid hydrogen evaporator heat exchanger, with a focus on the role of simulation in determining key design features necessary in order to meet the stringent requirements over the wider operating envelope of the device. Insights into the thought process behind selecting the right simulation approach and stepping through complexity are given. Solutions necessary to minimize the risk of icing of the heating fluid are presented, in the form of both operating requirements as well as geometrical design of the device. Assessments conducted to verify that the large thermal gradients do not compromise the structural integrity of the device are also summarized. The key performance metrics, related to the efficiency and integrity of the evaporator, are also outlined. Finally, the paper summarizes the performance testing conducted and test results obtained which validated the design, ahead of it being integrated into the thermal management system developed by the NEWBORN team.
Presented By Christian Gscheidle (Fraunhofer SCAI)
Authored By Christian Gscheidle (Fraunhofer SCAI), Jochen Garcke (Fraunhofer SCAI), Rodrigo Iza-Teran (Fraunhofer SCAI)
AbstractThe Proper Orthogonal Decomposition (POD) has been used for several years in the post-processing of highly-resolved Computational Fluid Dynamics (CFD) simulations. While the POD can provide valuable insights into the spatial-temporal behavior of single transient flows, it can be challenging to evaluate and compare results when applied to multiple simulations. However, the analysis of bundles of simulations is a commonly needed task in the engineering design process, for example, when investigating the influence of geometrical changes or boundary conditions on the flow structure. A manual comparison of many simulations can be very time-consuming. On the other hand, studying only scalar and integral quantities of interest will not be sufficient to understand all relevant aspects of the flow. Therefore, we propose an automated workflow based on data-driven techniques, namely dimensionality reduction and clustering. The aim is to extract knowledge from simulation bundles arising from large-scale transient CFD simulations. We apply this workflow to investigate the flow around two cylinders as a practical example. In close proximity to the cylinders, complex shear layer interactions take place that lead to varying modal structures in the wake region. A parameter study is designed by changing the relative position of the two cylinders to each other. As a result, multiple clusters in the parameter space can be identified that each show a similar characteristic flow behavior. A particular emphasis lies in the introduction of in-situ algorithms to compute suitable data-driven representations efficiently and concurrently to the run of a simulation on the compute cluster. The in-situ data analysis approach reduces the amount of data input and output, but also enables simulation monitoring to reduce computational efforts, e.g., a data-driven early stopping or outlier analysis when running several simulations in engineering design studies. Finally, a classifier is trained to predict characteristic physical behavior in the flow only based on the input parameters, i.e., the relative positions of the two cylinders. This allows us to predict the principal flow dynamics for unseen parameter combinations without running additional, expensive CFD simulations.
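A minimal sketch of this kind of analysis is shown below: POD of a snapshot matrix via the SVD, followed by clustering of the simulation bundle using features derived from the POD coefficients. The array names, feature choice and use of k-means are illustrative assumptions, not the authors' implementation.

```python
# POD of snapshot matrices, then clustering of simulations in feature space.
import numpy as np
from sklearn.cluster import KMeans

def pod(snapshots, n_modes):
    """snapshots: (n_dof, n_snapshots) array of mean-subtracted flow fields."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :n_modes]                          # spatial POD modes
    coeffs = (s[:n_modes, None] * Vt[:n_modes]).T   # temporal coefficients
    return modes, coeffs

# One feature vector per simulation in the bundle, e.g. the energy carried by
# the leading modes (random data stands in for the real simulation results):
rng = np.random.default_rng(1)
features = np.vstack([np.var(pod(rng.normal(size=(500, 200)), 5)[1], axis=0)
                      for _ in range(20)])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
```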
Presented By Hervandil Sant'Anna (Petrobras)
Authored By Hervandil Sant'Anna (Petrobras), Carlos Eduardo Simoes Gomes (Petrobras)
AbstractThis study addresses the structural analysis of the wagon gate of a Petrobras dam, which is a critical structure for water intake by the upstream refinery. Built in 1967, the dam and its accessory structures have undergone corrective maintenance over the years. Following the accidents in Mariana and Brumadinho, the National Water Agency (ANA) revised inspection procedures, classifying this dam as low immediate risk but with high potential for associated damage in case of failure. The main objective of this study is to evaluate the current structural conditions of the wagon gate, which ensures the tightness of the pipeline by blocking about 22 meters of water column. The methodology used is based on elastoplastic stress analysis according to the API 579-1/ASME FFS-1 (2016) code, with the construction of an "as-built" model of the structure and thickness measurements. The structural analysis was performed using the Finite Element Method (FEM), which allows a detailed assessment of stresses and deformations in the structure. The model was built based on drawings provided by the refinery and thickness measurements taken on the plates and beams that make up the gate. The mechanical properties of the material were obtained from the ASME II Part D code, and the stress analysis followed the API 579-1 methodology. The results indicate that the wagon gate, despite the natural deterioration process, does not present an immediate risk of structural failure. The numerical analysis considered hydrostatic pressure loads and the structure's own weight. Boundary conditions were defined to prevent rigid body movements, and soil stiffness was modeled based on the vertical reaction modulus. The API 579-1 methodology allows the extrapolation of design codes, which was essential for the evaluation of the wagon gate, since the ABNT NBR 8883 standard, used as a design reference, was canceled in 2019. Elastoplastic stress analysis requires the multiplication of load combinations by load factors, as described in table 2D.4 of API 579-1. The safety coefficient was obtained from NBR 8883 and applied in the evaluation of the risk of plastic collapse failure. In addition to the analysis with the safety coefficient, additional simulations were performed to verify the actual state of stresses and deformations in the structure, considering different thickness conditions in the gate components. The thicknesses were measured by the refinery on 29/5/2020, and additional hypotheses were considered for regions without direct measurements. This study was essential to ensure the safety of workers during the maintenance of downstream components of the gate and to ensure the structural integrity of the dam. The detailed analysis of the current structural conditions of the wagon gate provides essential information for decision-making on future maintenance and risk mitigation measures.
BiographyI have a degree in mechanical engineering and a master's degree in structural integrity. I have been working at Petrobras for 20 years in the area of static equipment inspection, performing stress analysis and fracture mechanics assessments. I am currently exploring Data Science and Artificial Intelligence.
Presented By Louise Wright (National Physical Laboratory)
Authored By Louise Wright (National Physical Laboratory), Liam Wright (National Physical Laboratory), Kathryn Khatry (National Physical Laboratory), Joao Gregorio (National Physical Laboratory), Michael Chrubasik (National Physical Laboratory), Maurizio Bevilacqua (National Physical Laboratory)
AbstractThere is a long history of the use of engineering simulation for design. This virtual approach is typically followed by physical prototyping, testing and refinement to reach a final design, followed in many cases by a physical testing regime to meet regulatory requirements. For many companies, the use of simulation tools has reduced the time and cost associated with getting new products to market due to the ability to explore multiple designs, and has reduced resource usage and improved product quality by enabling exploration of aspects of manufacturability and long-term in-use performance. Companies are increasingly seeking to gain similar benefits beyond the design stage. Some companies whose products are subject to extensive regulatory testing requirements are seeking to provide evidence of compliance through a combination of simulation and testing. It is common in such industry sectors to have a “testing pyramid”, where the safety of a complex multi-component product is demonstrated by carrying out tests of materials, components, assemblies and complete systems, with the number of tests carried out decreasing as the complexity of the object under test increases. A “smarter testing” approach would replace some of these physical tests with simulations and would feed information between the various tests to improve the validation of the simulation and the confidence in the evidence of safety. Some products cannot be fully tested via physical testing alone because they cause a risk to human safety. An example of current relevance is autonomous vehicles. The artificial intelligence (AI) that controls an autonomous vehicle is often trained on data obtained from human-controlled journeys of a vehicle with the sensor suite in operation, so that the AI is shown what safe driving looks like under typical conditions. However, many of the situations most likely to lead to an accident are not encountered in typical driving conditions, and could cause risk to life if recreated deliberately. A simulation can potentially recreate high-risk scenarios safely for both training and testing purposes. Some products can significantly improve lifetime prediction and understanding of real-world performance by linking models and data in a digital twin. This approach can lead to improved future design iterations and more effective maintenance plans as the understanding of the product improves. Application of this approach could support personalisation of devices such as medical prostheses, where monitoring, adjustment and individualisation could significantly improve people’s lives. In all of these applications it is important to note that the company developing the simulations is not the only party that needs to have trust in the simulation results. That trust needs to be shared by regulators, end users, and in some cases the general public. These three seemingly distinct themes of activity are strongly linked, not least because they have the same need underpinning them: they need to combine validated models and measured data to make trustworthy predictions of real-world behaviour. This need can be answered most efficiently by a combination of activities in several technical areas, including data quality assessment, software interoperability, semantic technologies, model validation, and uncertainty quantification. 
The technology readiness level of these areas is varied, and the level of awareness and uptake of good practice of each technical area varies across sectors.This paper discusses the common features, and differences between, the fields of smart testing, virtual test environments, and digital twins. Starting from a consideration of commonality we will highlight areas where existing methods and expertise could be better exploited, and identify areas where further research and development of tools would accelerate successful application of trustworthy digital assurance approaches in industry.
Presented By Tobias Ulmer (Airbus Operations)
Authored By Tobias Ulmer (Airbus Operations), Niklas Dehne (Airbus Operations GmbH), Vickram Martin Singh (Airbus Protect GmbH)
AbstractFor several years, the Airbus Test Center Bremen has successfully performed aircraft physics Virtual Testing (VT). The approach of VT by means of computer simulations of physics-based models has been applied for all major aircraft development programmes of the last years since the Airbus A380. In the given context and for the questions of interest, flexible multibody simulation (MBS) has been identified as one of the preferred simulation methods. Compared to Finite Element Model (FEM) simulations, the application of enforced displacement data as well as distributed airloads on flexible bodies can be challenging in MBS, especially if the model and the applied boundary conditions are to be kept separate from each other. In FEM simulations, the different model components are generally assembled by means of include statements immediately before solving, hence providing an ideal separation between the structural definition of the considered product and the applied boundary conditions. In MBS simulations, by contrast, the applied loads and displacements are generally merged into the model and defined inside it, alongside the definition of the mechanism under consideration. In consequence, this requires updates of the complete model in case of updated boundary conditions and easily leads to an undesirably high number of model variants and/or models which are crammed with boundary conditions. The paper describes the selected approach to maintain the separation of model and boundary conditions by means of user subroutines for application of loads and enforced displacements. It gives an overview of the different approaches for application of different representations of airloads on different representations of flexible bodies. It focuses on the approaches for application of lumped force vectors on flexible bodies based on 1D beam elements by scaled, triangular sets of modal forces, as well as flexible bodies based on 2D shell elements by distribution via rigid body elements. Besides that, it introduces an approach for application of enforced displacements originating from linear FEM calculations under approximation of geometric nonlinearity.
AbstractThe design and certification process of aerospace and automotive composite structures is notably time-consuming and costly due to the necessity of thousands of physical tests to obtain design allowables. These tests significantly limit specimen configurations to a few stacking sequences and geometric parameters. In this presentation, we will introduce an efficient finite element (FE)-based framework designed to perform high-fidelity progressive failure analyses of fiber-reinforced polymers. This framework extends beyond the micro-scale fiber-matrix unit cell to encompass composite laminates at the macro-scale. Initially, cure-induced residual stresses are calculated using a coupled chemo-thermo-mechanical analysis. Following the determination of residual stresses and deformed geometry, a progressive failure analysis (PFA) step is conducted. The FE framework for PFA is based on a semi-discrete modeling approach, which aims to achieve a compromise between continuum and discrete methods. The laminate is modeled in a layer-wise manner (meso-scale), where each lamina is connected via cohesive elements or contact. The enhanced semi-discrete damage model (ESD2M) toolset includes an automated smart meshing strategy with failure mode separation, the enhanced Schapery theory with a novel generalized mixed-mode law, and various probabilistic modeling strategies. These combined modules make the computational tool highly effective in accurately capturing various failure modes, such as matrix cracks, fiber tensile failure, and delamination. The underlying principle of the semi-discrete mesh is to partition each lamina layer into strips and bulk regions. The finite width of the strips is chosen to be much smaller than the bulk distance for higher accuracy. While the bulk elements only capture fiber failure modes, the strip elements can capture both fiber and matrix failure. Each strip is assigned random values of failure strengths associated with matrix failure, while the axial strength and delamination (cohesive) properties are randomized using a global field function. Since the model can be run in a probabilistic manner, Monte Carlo simulations can be performed to predict the structural response and its scatter. Material damage and failure are modeled using a non-linear constitutive model, combining Schapery theory and the crack band method, which is implemented in a VUMAT user subroutine. The model is validated using various test cases, including unnotched and open-hole laminates subjected to quasi-static tensile and compressive loading.
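A sketch of the probabilistic strength assignment described above is given below: each matrix strip receives a randomly drawn failure strength, so that repeated analyses form a Monte Carlo study. The distribution type and its parameters are assumptions for illustration, not values from the paper.

```python
# Draw random matrix-failure strengths for the strip elements (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_strips = 500
mean_strength = 60.0   # MPa, assumed mean transverse (matrix) strength
cov = 0.08             # assumed coefficient of variation

strip_strengths = rng.normal(mean_strength, cov * mean_strength, size=n_strips)

# One Monte Carlo realization corresponds to one such set of strip strengths
# written into the solver input (e.g. as initial state data for the user
# material routine) before the progressive failure analysis is run.
```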
Presented By Jürgen Zechner (Tailsit)
Authored By Jürgen Zechner (Tailsit), Lars Kielhorn (TAILSIT GmbH), Thomas Rüberg (TAILSIT GmbH)
AbstractThe numerical simulation of electromagnetic field problems often requires incorporating the unbounded volume that surrounds an object of interest. This exterior region cannot be fully captured by a domain-based discretization scheme like the Finite Element Method (FEM) and requires a truncation of the domain in combination with complicated boundary conditions. However, the surface-based Boundary Element Method (BEM) exactly represents such exterior regions without the need for a volume mesh, but can only handle linear and homogeneous material behavior. The coupling of FEM and BEM thus leverages the advantages of FEM for electromagnetic simulations and BEM for modelling exterior domains, resulting in a comprehensive solution framework that is perfectly suited for the simulation of multi-physics phenomena. This coupling approach is especially useful when analyzing situations in which conducting domains and/or magnets move with respect to each other and, possibly, deform due to their mutual electromagnetic forces. Here, the mechanical behavior of such objects is calculated with an FEM model whose mesh is then used within the FEM-BEM coupling to compute the forces. The change of configuration due to the mechanical deformation does not require a re-meshing of the computational domain. Prominent examples are electromagnetic metal-forming, sensors and actuators consisting of electro-magnetostrictive materials, and magnetohydrodynamics. We are currently developing a flexible and yet easy-to-use framework to address such kinds of multi-physical problems. This is achieved by using an open-source coupling library for multi-physics simulations which offers a well-defined API for exchanging data between a wide range of different numerical solvers. In this work we present an adapter for our FEM-BEM coupling code. Through numerical experiments and validation against measurements and analytical solutions, the efficiency and accuracy of the proposed approach is demonstrated across a range of scenarios like simple eddy current brakes or the TEAM benchmark problem 28. This work contributes to advancing the understanding and simulation capabilities of multi-physics phenomena involving electromagnetic interactions, paving the way for enhanced design and optimization of electromechanical systems in diverse engineering applications.
Authored & Presented By Bob Tickel (Cummins Engine Company - Technical Center)
AbstractDesigner-level tools are ever improving and are sometimes embedded into CAD packages for easier access by designers. The tools are so easy to use that designers often prefer the standalone tools over those within their CAD tool. These tools also typically require no skills in mesh creation, or even knowledge of various element types, mesh sizing, etc. The tools can also have multiphysics and optimization capabilities. And to top it off, they can provide almost instantaneous results. These tools are aiding the design of components and subsystems in near real time by providing physics-based insights that allow better design decisions very early in the design and development process. In addition to designers using simulation tools, where a mature simulation culture exists, project leaders and decision makers expect and rely on simulation results to make product and design decisions. This in turn drives analysts, designers, project leaders, etc. to develop not just one-off simulations, but large DOEs, AI/ML for fast-running reduced order models, optimization, etc. – all to increase the speed and quality of simulation-based decision making. Cloud and HPC systems have enabled the large-scale adoption of simulation. Simulation and Process Data Management tools are also becoming increasingly relied upon for model storage and re-use and can enable significant engineering efficiency improvements. The speed and power of computer resources will continue to increase, driving a simulation culture to do more, faster, better. GPUs are already having a significant impact today and things like quantum computing are being used in some industries. As tools become easier to use and simulation tools, processes, models and results become ubiquitous across an organization, what is the role of the traditional simulation expert? The author recommends 4 key things: • Continued deep understanding of physics and tool capabilities; • Mentoring and guiding others; • Process automation/management with a focus on multiphysics; • The new, unique and difficult. This paper/presentation will go into more detail on these 4 items and provide additional details regarding the democratization of simulation tools and methods.
DescriptionThis short workshop will provide information on the NAFEMS Professional Simulation Engineer (PSE) Certification scheme, presenting why it is useful, how it works, the application process and some client case studies. PSE enables simulation engineers to demonstrate competencies acquired throughout their professional career, as recorded in the PSE Competency Tracker; the scheme is described at https://www.nafems.org/professional-development/certification/.
Presented By Donald Kelly (New South Wales University)
Authored By Donald Kelly (New South Wales University)Garth Pearce (School of Mechanical and Manufacturing Engineering)
AbstractThis paper updates research presented at the 2011 NAFEMS World Congress in Boston in the paper identified below. It will present recent work on the vector-based approach developed to identify the internal load transfer in two- and three-dimensional elasticity. The vectors in the field are defined by a column of the stress tensor. The theoretical work in the 2011 paper showed that contours plotted through the vector field identified walls across which load did not flow, isolating paths for transfer of load across the domain. Using the stress results from a finite element analysis, load paths were plotted that enhanced the post-processing of finite element results. The new work extends the earlier approach by allowing the magnitude of the load to vary along the path as load is introduced by a body force. This allows the approach to include gravity loads, and the transfer of moments and shear forces in plates and shells. New work also identifies that load transfer in trusses and frames is a special case in which the paths defined by the vectors are parallel to the axes of the members. It is also emphasized that paths need to be investigated for each of the three directions identified by the three columns of the stress tensor in three-dimensional elasticity. A plot of a path for a single load direction (say the x-path for a load applied in the x-direction using vectors defined by the first column of the stress tensor) can miss y-direction and z-direction load transfer due, for example, to bending moments arising from the applied load. New work is also considering the three-dimensional load transfer within laminated composite materials, including load transfer between plies in the presence of geometric discontinuities and ply drops. Applications by the authors will include a selection of examples identifying load paths in building frames and static load transfer in bolted joints, curved shells and composite laminates. D. Kelly, G. Pearce, M. Ip, and A. Bassandeh, "Plotting load paths from vectors of finite element stress results," presented at the NAFEMS World Congress 2011, Boston, USA, May 2011. Available at: https://www.nafems.org/publications/resource_center/nwc11_163/
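For readers unfamiliar with the notation, the vector fields referred to above are simply the columns of the Cauchy stress tensor (standard continuum mechanics, not additional material from the paper): $\mathbf{v}_x = (\sigma_{xx}, \tau_{xy}, \tau_{xz})^T$, $\mathbf{v}_y = (\tau_{xy}, \sigma_{yy}, \tau_{yz})^T$, $\mathbf{v}_z = (\tau_{xz}, \tau_{yz}, \sigma_{zz})^T$. A load path is a curve everywhere tangent to the chosen vector field, and with a body force $\mathbf{b}$ the load carried along an x-path is no longer constant but varies according to the equilibrium equation $\nabla\cdot\mathbf{v}_x + b_x = 0$, which is the extension described in the abstract.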
BiographyDonald Kelly is an Emeritus Professor at the University of New South Wales (UNSW Sydney) and a Fellow of the Royal Aeronautical Society. While a lecturer at the University College of Swansea (UK) he published research into error estimates and adaptivity for the finite element method. After joining UNSW his research focussed on the design and analysis of composite structures. He retired from the teaching staff at UNSW in 2010 but has continued his research collaboration with Professor Garth Pearce on the theory and application of load paths. Since retiring he has worked for Strand7 developing finite element software.
Authored & Presented By Sacha Jelic (ThermoAnalytics)
AbstractWith summers getting warmer every year, urban climate thermal management is becoming critical in order to avoid extreme temperatures in urban environments, which can lead to discomfort, health issues or even death, especially for elderly persons. During summer, ambient temperatures rise significantly, with solar radiation playing a major role in intensifying urban heat. This solar energy heats concrete buildings, asphalt roads, and even pedestrians, exacerbating urban temperatures. Infrastructure heats up easily due to its high solar absorption coefficients, but cools down very slowly due to its high mass and specific heat. As a result, pavement temperatures can rise far above the ambient temperature and exceed 70°C. These elevated surface temperatures in turn increase the local air temperatures, leading to discomfort for humans. This study demonstrates how the urban climate can be predicted around buildings or urban environments over long durations. A CFD solver that simulates the wind and convection is coupled to a thermal radiation/conduction simulation tool. The thermal tool is able to simulate the incoming solar radiation from early morning to evening, i.e. with a moving sun, without requiring large simulation times or computational resources. Convection from CFD is imported at given times or linearly interpolated with respect to wind speed and direction, which then allows temperature predictions over days, months or even a year. It will be shown how certain design choices can influence local temperatures. An illustration of this is the application of solar-reflective glass windows on buildings, so that the building does not absorb the incoming radiation. Other examples are the addition of trees in the streets, which create shade and do not heat up as much as concrete, and the use of green roofs, which facilitate evaporation through plant transpiration and also provide an insulation layer for the building. Such design changes lower the temperature in and around buildings, improve comfort and reduce the energy demand required for active cooling systems like air conditioning.
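As a minimal sketch of the convection-import step (hypothetical numbers; the actual tool chain's interpolation may also account for wind direction), linear interpolation of CFD-derived film coefficients with respect to wind speed could look like this:

```python
import numpy as np

# Hypothetical film coefficients h [W/(m^2 K)] per surface patch, precomputed by CFD
# at a few reference wind speeds [m/s].
cfd_wind_speeds = np.array([1.0, 3.0, 5.0, 8.0])
cfd_film_coeffs = np.array([
    [4.2, 5.1, 3.8],     # h for three patches at 1 m/s
    [9.0, 11.2, 8.1],    # at 3 m/s
    [13.5, 16.8, 12.0],  # at 5 m/s
    [19.7, 24.3, 17.6],  # at 8 m/s
])

def film_coefficients(wind_speed):
    """Linearly interpolate the per-patch film coefficients to the current wind speed."""
    return np.array([
        np.interp(wind_speed, cfd_wind_speeds, cfd_film_coeffs[:, j])
        for j in range(cfd_film_coeffs.shape[1])
    ])

h_now = film_coefficients(4.2)  # handed to the thermal solver at the current time step
```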
Authored By Barbara Neuhierl (Siemens Digital Industries Software)Thomas Eppinger (Siemens Digital Industries Software)Jan-Willem Handgraaf (Siemens Digital Industries Software)
AbstractCarbon capture, utilization and storage (CCUS) is considered an effective approach to reduce greenhouse gas emissions. CCUS is a multistep process involving capturing the carbon dioxide produced by industrial sources, such as power plants or factories, before it is released into the atmosphere, transporting it to a storage location, and then securely storing it underground or in other suitable reservoirs. Ongoing research and development activities are focused on improving the efficiency and scalability of carbon capture systems. Separation through membranes is one of the most widespread methods for isolation. There are different types of membranes for CO2 separation: the most frequently used ones are made from polymers of organic compounds. Nevertheless, inorganic formulations (e.g. zeolites) can provide very high thermal and chemical stability (generally at the cost of lower permeability), and new advanced membrane materials are emerging, including composite, MOF (metal-organic framework), ZIF (zeolitic imidazolate framework) and CMS (carbon molecular sieve) membranes. Compared to other carbon dioxide isolation methods, like amine washing, membrane separation has the advantage of being highly energy efficient, as typically no heating is required. This paper presents an approach for simulating carbon dioxide membrane separation. Firstly, the relevant membrane properties are investigated with the help of computational chemistry. A zeolite membrane and a polymer membrane are examined. The membrane permeability is determined using a multiscale computational chemistry approach. In doing so, various length and time scales and reactions, from the atomistic level up to the micrometer scale, are represented, taking into account quantum mechanical properties as well as molecular mechanics behavior. The approach is validated through successful comparison with properties documented in the literature. In the next step, permeability properties for the membrane are derived from the computational chemistry simulation. These are then used within a finite volume based CFD (Computational Fluid Dynamics) simulation of a separation unit. To model the membrane, a selectively permeable baffle interface is applied inside a cavity. Flue gas, modelled as a mixture containing several species, is introduced into the system, which has two outlets. The permeate passes the membrane and can leave the system through an outlet on the other side of the membrane. The goal of the simulation is to assess the efficiency of the separation process with respect to the design and dimensions of the ducts and cavities as well as process parameters like flue gas concentration and flow velocities. It also allows new membrane materials to be designed and tested virtually. In this way, a digital twin of the CO2 separator including the detailed membrane formulation can be realized.
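To illustrate how a permeability obtained from computational chemistry feeds the CFD membrane model, a minimal solution-diffusion flux estimate is sketched below; the permeance values are invented for illustration and the CFD tool's actual baffle-interface model may be more elaborate.

```python
def species_flux(permeance, p_feed, p_permeate):
    """Molar flux [mol/(m^2 s)] across a selectively permeable baffle face,
    solution-diffusion form: J_i = permeance_i * (p_i,feed - p_i,permeate)."""
    return permeance * (p_feed - p_permeate)

# Hypothetical permeances [mol/(m^2 s Pa)], e.g. derived from multiscale chemistry runs.
permeance = {"CO2": 3.0e-7, "N2": 6.0e-9}

# Partial pressures [Pa] on the feed and permeate sides of one baffle face.
j_co2 = species_flux(permeance["CO2"], p_feed=15_000.0, p_permeate=2_000.0)
j_n2 = species_flux(permeance["N2"], p_feed=85_000.0, p_permeate=5_000.0)
ideal_selectivity = permeance["CO2"] / permeance["N2"]  # quick screening metric for a membrane
```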
Authored & Presented By Fumio Nakayama (FunctionBay)
AbstractMulti-body dynamics simulations including flexible bodies have been performed for some time, but most of these flexible bodies are based on the mode synthesis method. Advances in analysis technology have made it possible to use flexible bodies that handle nodes directly. While mode-synthesis flexible bodies can only be handled in the linear range, node-based flexible bodies have the advantage of being able to handle large deformations, nonlinear materials, and contacts. Furthermore, the ability to handle node-based flexible bodies has made it possible to couple multi-body dynamics with other physical fields such as heat transfer and fluid dynamics, so multi-body dynamics analysis is becoming increasingly multiphysical. This paper introduces an automobile disc brake model as an example of combining multi-body dynamics analysis, structural analysis, and heat transfer analysis. Disc brake systems are currently installed in most automobiles and motorbikes. A disc brake system obtains deceleration force from the frictional force generated by pressing the pads against the disc. However, when the disc and pads become hot due to frictional heat, the braking effect deteriorates. In addition, problems such as brake squeal can occur. Performing simulations to address such problems and predict these physical conditions is extremely useful in the design and development of brake systems. The analysis model consists of a brake disc and brake pads; the brake pads are pressed against the rotating brake disc to generate friction force and decelerate the vehicle. The heat distribution and thermal deformation of the brake disc due to the frictional heat generated during this process were evaluated. These results show that it is now possible to evaluate deformation, stress distribution, and temperature distribution of the disc during braking in a single analysis using a single analytical model.
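For context, the frictional heat source driving the thermal part of such a model is commonly written as $\dot{q} = \mu \, p \, v_{\mathrm{rel}}$, i.e. heat generated per unit contact area equals the friction coefficient times the local contact pressure times the sliding velocity, with the heat partitioned between disc and pad; this is a standard relation quoted here for orientation, not a statement of the specific formulation used in the paper.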
BiographyFumio Nakayama worked as an engineer at an FEM software company from 1998, where he was responsible for structural, heat transfer, and fluid analysis. He joined FunctionBay K.K. in 2007 and works as a multi-body dynamics (MBD) engineer.
Presented By Panagiotis Fotopoulos (BETA CAE Systems SA)
Authored By Panagiotis Fotopoulos (BETA CAE Systems SA)Michael Richter (MATFEM Ingenieurgesellschaft mbH, Germany)
AbstractIn recent years, the pursuit of lightweight products has become a critical objective, particularly in the automotive industry. An increasing number of components are being replaced with non-reinforced and fiber-reinforced plastic materials to meet stringent demands for both weight reduction and safety. However, this shift introduces significant challenges in industrial crashworthiness and pedestrian safety simulations, especially for larger components with locally varying mechanical properties influenced by the injection molding process. From a simulation perspective, addressing these challenges requires a multi-step process: (i) conducting an injection molding simulation, (ii) mapping the resulting material properties onto the structural mesh, (iii) preparing accurate material model definitions, (iv) performing the structural simulation, and (v) analyzing the results. While effective, this workflow can be resource-intensive, particularly during the early design stages when the design geometry is still evolving and full-scale injection molding simulations with standard solvers are prohibitively costly in terms of time and budget. To tackle these challenges efficiently, this work proposes, in addition to the standard injection molding analyses, a simplified solution that streamlines the integration between injection molding and structural simulations. This approach enables multiple "what-if" studies and optimization loops, facilitating rapid iteration during early design stages while maintaining results of acceptable quality. Additionally, it helps to bridge the gap between structural and molding engineers by introducing simplified solvers, empowering engineers from various disciplines to explore molding effects without extensive expertise. A benchmark study was conducted to demonstrate the effectiveness of this approach using a common plastic component subjected to standard loading tests; for this purpose, a plastic part of hat-profile shape was selected. Simulations were performed with both conventional isotropic material models and high-fidelity orthotropic material models derived from injection molding simulation data. The results highlight and quantify the impact of manufacturing processes on the component's crashworthiness, underscoring the importance of incorporating manufacturing effects into structural simulations.
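A minimal sketch of the mapping step (ii) is given below, using a simple nearest-neighbour transfer of per-element fiber orientation data between meshes; the mapping algorithms in the actual tool chain are more sophisticated, and the data here are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def map_fiber_orientation(molding_centroids, molding_tensors, structural_centroids):
    """Transfer per-element fiber orientation tensors (3x3, flattened to 9 components)
    from the injection-molding mesh to the structural mesh by nearest neighbour."""
    tree = cKDTree(molding_centroids)              # molding element centroids, shape (n, 3)
    _, nearest = tree.query(structural_centroids)  # closest molding element per structural element
    return molding_tensors[nearest]                # shape (m, 9) on the structural mesh

# Placeholder data just to show the call signature.
rng = np.random.default_rng(0)
a_mapped = map_fiber_orientation(rng.random((1000, 3)), rng.random((1000, 9)), rng.random((800, 3)))
```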
BiographyPrincipal Customer Service Engineer at BETA CAE Systems. After graduating from the Technological Institute of Thessaloniki (Greece), he obtained his master's degree in automotive engineering at Coventry University (UK). He started his career as a CAE engineer at BETA CAE Systems, and for more than 15 years he has served in the Customer Service team, supporting and researching various disciplines and solutions. He specializes in solutions for plastics and injection molding applications.
Presented By Sangtae Kim (FunctionBay)
Authored By Sangtae Kim (FunctionBay)Danny Shin (FunctionBay, Inc.)
AbstractMulti-Body Dynamics (MBD) is a technology used to analyze assemblies in motion, widely applied across numerous industries to study systems such as ground vehicles, aircraft, robots, and other mobile devices with intricately connected components. Initially founded on the assumption of rigid bodies to simplify governing equations, MBD has evolved to incorporate various physical phenomena, improving simulation accuracy and better reflecting real-world behaviors.This paper discusses the advancements in modern MBD, focusing on the integration of Finite Element (FE) methods with MBD simulations, particularly in cases involving large deformations and contacts between flexible and rigid bodies. With engineering systems becoming increasingly complex, the need for accurate modeling and simulation techniques has grown significantly.Traditional MBD methods, while effective for analyzing rigid body dynamics, often struggle to accurately capture the complex behaviors resulting from the deformations of flexible components under diverse loading conditions, especially when nonlinear behaviors such as contacts are involved.The coupling of FE methods with MBD enables a more comprehensive analysis by incorporating material properties and nonlinear behaviors. This integrated approach allows engineers to simulate real-world conditions with greater accuracy, leading to improved predictions of system performance. The paper presents several case studies demonstrating the application of FE-MBD coupled dynamic simulations and static simulations in complex mechanical systems, such as vehicle suspension systems and robotics.Additionally, the paper explores the coupling of Computational Fluid Dynamics (CFD) with MBD, highlighting relevant case studies. Many mechanical systems interact with fluids like lubricants or water, which means they need to deal with problems like oil churning, liquid splashing, or water getting into parts.Recent advancements have enabled the simulation and analysis of these challenges using combined MBD and CFD techniques. In particular, this coupled analysis can incorporate flexible bodies, allowing for the consideration of deformation and stress induced by fluid interactions. This capability enables the simulation and verification of processes such as fluid pouch filling.In conclusion, integrating FE analysis with MBD marks a significant advancement in engineering simulation. By focusing on large deformations and contact interactions, this coupled approach enhances simulation fidelity and provides valuable insights that drive innovation in design and analysis. The paper highlights the ongoing development and application of these multiphysics techniques based on Multi-Body Dynamics to meet the evolving demands of complex engineering systems.
Authored & Presented By Sunit Mistry (AWE)
AbstractCubeSats are a class of small satellite constructed from 10 cm cube units, where one or more cube units can be combined to make a satellite. CubeSats are modular in nature and often exploit COTS components for the chassis and electronics. They tend to be launched as secondary payloads which, combined with the modularity, can make them more affordable than conventional satellites. The space environment in which CubeSats operate presents several extreme conditions that do not occur on Earth, including vacuum, electromagnetic, radiation and thermal environments. The thermal environment that satellites need to survive includes extreme cold due to the vacuum of space, coupled with direct insolation and heat-generating components that have no convective cooling because of the vacuum. As such, Finite Element (FE) models of CubeSat designs are typically required to accurately assess their thermal performance. To design a CubeSat to meet its mission objectives, the CubeSat requirements can often conflict with each other and need to be balanced. For example, a CubeSat component may have a thermal envelope that requires a heater to maintain, but this affects the electrical power budget of the CubeSat. Therefore, rapid assessment of the thermal performance of a CubeSat becomes highly desirable. However, building CubeSat thermal models is currently manual and time consuming, and it was proposed that an automation process could be devised to expedite model development. The process was aimed at reducing the time it takes to create thermal CubeSat models, to improve the system design process. The ultimate aim is to completely automate the process such that optimisation algorithms can be used to optimise satellite designs to best meet their mission objectives. The Siemens NX CAD software alongside the integrated Simcenter simulation software was utilised for generating the model. The modular nature of most CubeSats meant that it was possible to build a library of common component-level FEA models that could be assembled into an FEA model of the full assembly. This paper will describe the solutions developed to automate this process and discuss future developments, including the use of optimisation algorithms to aid the design process.
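The fragment below is only a schematic illustration of the library-based assembly idea; the component names, files and helper function are hypothetical and do not reflect the NX/Simcenter automation API actually used.

```python
# Hypothetical component library: reusable FE models plus their nominal heat loads.
COMPONENT_LIBRARY = {
    "1U_chassis":   {"fem_file": "chassis_1u.fem", "power_w": 0.0},
    "obc_board":    {"fem_file": "obc.fem",        "power_w": 1.5},
    "battery_pack": {"fem_file": "battery.fem",    "power_w": 2.0},
    "radio":        {"fem_file": "radio.fem",      "power_w": 4.0},
}

def build_assembly(stack):
    """Collect the component FE models and heat loads for a chosen CubeSat configuration."""
    fem_files, heat_loads = [], {}
    for slot, name in enumerate(stack):
        component = COMPONENT_LIBRARY[name]
        fem_files.append(component["fem_file"])
        heat_loads[f"slot_{slot}_{name}"] = component["power_w"]
    return fem_files, heat_loads

fem_files, heat_loads = build_assembly(["1U_chassis", "obc_board", "battery_pack", "radio"])
```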
Authored & Presented By Francois Mazen (Kitware Inc)
AbstractThe term “Digital Twin” may correspond to many different definitions, from the scientific modeling of a physical phenomenon to an enhanced virtual supervision system of a complex industrial setup. In this presentation we consider the specific flavor where we want to gather data from an actual system, usually through sensors, feed a numerical model with the real-time data, visualize the result, make predictions with derived quantities, and eventually influence the actual system with a feedback loop. However, this process is not new; it is very similar to the supervision process of industrial installations. The main difference is the recent technological breakthrough in managing very large data as input, throughput and output of the models, which leverages novel artificial-intelligence-based techniques. Processing large data also means using High Performance Computing capabilities to tackle data that a classic workstation would not be able to open and process. Visualization is a powerful tool for Digital Twins. For example, while the Digital Twin is running, being able to extract meaningful information from sensor data and to display it is a natural way to monitor the actual system. But scientific visualization can also be successfully used while building the Digital Twin, to monitor the correctness of the building process. When data is massive, specialized software with a client/server architecture should be used for data analysis on high performance computers. These scalability challenges with massive data have been addressed for decades, and by combining these tools in a Digital Twin, users can interact with large data living on a computer cluster while keeping a responsive user experience. Under the hood, the users interact remotely with the data and trigger new analyses, which are computed on the cluster where the massive data is gathered. Only the result of the analysis and visualization is streamed to the users, as images or videos. There is a large ecosystem around these scientific visualization tools, including bridges to web front-ends, Virtual Reality devices, Python AI libraries, and advanced rendering back-ends. Some real use cases are climate predictions made by DKRZ, where meaningful features are extracted from petabytes of data, and weather prediction scenarios from The Cyprus Institute. Another example from academic studies is the CALM-AA project, where aero-acoustics measurements are visualized through Virtual Reality headsets with live interaction with the data. We also mention the VESTEC project, funded by the European Union, for urgent decision making using ensemble simulation and real-time analysis with in-situ techniques. This project demonstrates that early feedback from ensemble simulation has a meaningful impact on urgent decisions and could save time, resources and human lives. These use cases emphasize that scientific visualization tools for massive data can gather information from a real system, visualize it in 3D and send back information, actually closing the loop. Artificial Intelligence techniques are at the core of modern Digital Twins because, even as data sizes grow, the prediction, visualization and feedback loop should stay as close as possible to real time. Deep Learning surrogate models are one of the techniques to ensure the expected performance, and the system should be able to run inference of Deep Learning models and use the output to display meaningful data.
We will demonstrate that the in situ capabilities of scientific visualization tools can also be leveraged during Deep Learning network training. These tools allow users to visually monitor the output at the end of each epoch and efficiently tune the training process for better prediction and performance. In conclusion, scientific visualization tools are crucial for modern Digital Twins that require advanced processing and prediction on large data in order to make decisions and influence the modeled system. They can be leveraged during the building and tuning phases of the Digital Twin, in addition to being an essential part of its execution runtime.
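As a small, framework-agnostic sketch of that idea (the training loop is PyTorch-style and the file dump stands in for whatever in situ bridge the visualization tool provides), an end-of-epoch hook might simply export the latest prediction field for the visualization pipeline to pick up:

```python
import os
import numpy as np

def train_with_epoch_monitoring(model, loader, optimizer, loss_fn, epochs, dump_dir="viz"):
    """Training loop that writes one prediction field after every epoch so an external
    visualization tool can display it and the training can be tuned while it runs."""
    os.makedirs(dump_dir, exist_ok=True)
    for epoch in range(epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
        # End-of-epoch hook: export a sample prediction for in situ monitoring.
        sample_inputs, _ = next(iter(loader))
        field = model(sample_inputs).detach().cpu().numpy()
        np.save(os.path.join(dump_dir, f"prediction_epoch_{epoch:04d}.npy"), field)
```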
BiographyFrançois Mazen is the Director of the Scientific Visualization team at Kitware Europe. Throughout his career, François has focused on developing software solutions for scientific communities, from research to industry, covering numerical simulation methods, performance, Artificial Intelligence and large software architectures. In 2008, François received his engineering degree from IFMA (French Institute for Advanced Mechanics) in Clermont-Ferrand, where he was nominated among the top 10 students. The same year, he also received a Master of Science from Université Blaise Pascal (Clermont-Ferrand), where he specialized in Rigid Body. Before joining Kitware, his previous 13 years of experience included 4 years at Ansys as a Software Developer, customizing the Ansys Workbench application for specific customer requirements. He also worked for more than 6 years at Siemens PLM as Project Leader and Software Architect, where he integrated Robot's Path Planning Technology into large applications like Siemens NX, Dassault Catia and Tecnomatix Process Simulate. François is an open source enthusiast and has been contributing to several open source software projects since 2007. He is currently an active member of the Debian Operating System development team. Since 2021 he has led Kitware's European Scientific Visualization team, including team management, strategy, technical expertise, operational excellence and business development.
Presented By Rishi Ranade (NVIDIA)
Authored By Rishi Ranade (NVIDIA)Mohammad Nabian (NVIDIA) Ram Cherukuri (NVIDIA)
AbstractAerodynamic analysis plays a critical role in vehicle design, as it directly impacts fuel efficiency, stability, and overall performance. Traditional computational methods, such as CFD, can provide accurate predictions but are computationally expensive, particularly for complex geometries like those found in cars. Several machine learning (ML) models have been proposed in the literature to significantly reduce computation time while maintaining acceptable accuracy. However, ML models often face limitations in terms of accuracy and scalability and depend on significant mesh downsampling, which can negatively affect prediction accuracy. In this work, we propose a novel ML model architecture, DoMINO (Decomposable Multi-scale Iterative Neural Operator), developed in NVIDIA Modulus to address the various challenges in modeling the external aerodynamics use case. NVIDIA Modulus is a state-of-the-art, open-source scientific machine learning platform that enables research, development and deployment of surrogate ML models for a wide range of CAE applications. DoMINO is a point-cloud-based ML model that uses local geometric information to predict flow fields on discrete points. It takes the 3-D surface mesh of the geometry in the form of an STL file as input. A 3-D bounding box is constructed around the geometry to represent the computational domain. The arbitrary point cloud representation is transformed into an N-D fixed structured representation of resolution m × m × m × f defined on the computational domain. A multi-resolution, iterative approach is used to propagate the geometry representation into the computational domain and to learn short- and long-range dependencies. Next, we sample a batch of discrete points in the computational domain. During training these can be points where the solution is known (nodes of the simulation mesh), while during inference they can be sampled randomly, as the simulation mesh may not be available. For each of the sampled points in a batch, a subregion of size l × l × l is defined around it, and a local geometry encoding is calculated in this subregion. The local encoding is essentially a subset of the global encoding, depending on its position in the computational domain, and is calculated using point convolutions. Furthermore, for each of the sampled points in a batch, p nearest neighboring points are sampled in the computational domain to form a computational stencil of p + 1 points. The batch of computational stencils is represented by the points' local coordinates in the domain. The local geometry encoding is aggregated with the computational stencils to predict the solutions on each of the discrete points in the batch. The DoMINO model is trained and evaluated on the DrivAerML dataset [1]. Through our experiments we will demonstrate the scalability, performance and accuracy of our model, as well as the scalable and performant end-to-end data, training and testing pipelines that we have developed for accelerated computing. We will also introduce a benchmarking utility developed in NVIDIA Modulus to compare DoMINO with other model architectures on several CAE-specific metrics for the external aerodynamics use case. Finally, we will demonstrate end-to-end, real-time inference and visualization of DoMINO using NVIDIA Omniverse. 1. Ashton, N., Mockett, C., Fuchs, M., Fliessbach, L., Hetmann, H., Knacke, T., ... & Maddix, D. (2024). DrivAerML: High-fidelity computational fluid dynamics dataset for road-car external aerodynamics. arXiv preprint arXiv:2408.11969.
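To make the stencil construction concrete, the sketch below samples p nearest neighbours around each query point and expresses the resulting p + 1 point stencil in local coordinates; it is an illustration of the idea only, not the DoMINO implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_stencils(domain_points, query_points, p=7):
    """For each sampled query point, gather its p nearest neighbours and return the
    stencil of p + 1 points expressed in coordinates local to the query point."""
    tree = cKDTree(domain_points)
    _, idx = tree.query(query_points, k=p)       # neighbour indices, shape (n_query, p)
    neighbours = domain_points[idx]              # shape (n_query, p, 3)
    stencil = np.concatenate([query_points[:, None, :], neighbours], axis=1)
    return stencil - query_points[:, None, :]    # local coordinates, shape (n_query, p + 1, 3)

rng = np.random.default_rng(1)
stencils = build_stencils(rng.random((5000, 3)), rng.random((64, 3)), p=7)
```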
Authored & Presented By Laurence Vaughn (Dynisma)
AbstractDynisma Motion Generators combine class-leading latency and bandwidth with large horizontal excursion and high acceleration. This delivers unrivalled vehicle dynamics simulation capabilities in the most immersive experience possible for a driver. Our customers range from motorsport teams looking to extract the most from their drivers and cars, through to automotive manufacturers looking to shorten the development times of new vehicles. In either case, the accurate representation of the driving experience is key to their success. From full-system multi-body modelling right through to subsystem stress analysis, simulation and optimisation form the backbone of the development process of Dynisma Motion Generator systems. They are designed to target world-leading bandwidth long before confirming it in comprehensive real-world testing programmes. Within Dynisma the successful use of CAE is essential, as we do not build prototypes, so every motion generator that we design and build has to perform as expected when the customer uses it. This is combined with demanding project timescales that require us not only to be agile but also to select the appropriate simulation tool and methodology to suit the requirements. Our CAE needs vary depending on the design stage; therefore we use a range of CAE tools, including Matlab, Simulink, Simsolid and HyperMesh, combining the outputs of some tools as inputs to others to achieve our goals. One of the biggest challenges is determining what loads are introduced into the structure by the operation of the motion generator, and how they are distributed: from the attachment of the seat supporting the driver, all the way through to the floor supporting the machine. This talk will outline the approach to CAE that we use throughout the design process, supporting concepts through to certification. Our approach to CAE in the design process will be illustrated with examples from the design of our latest simulator. This will show how system-level models feed into the detailed assessments of components and ultimately successful certification for use across the world.
Authored & Presented By Govindaraju MD (TE Connectivity India)
AbstractThe Mini Modular Racking Principle (MiniMRP) constitutes a family of modular PCB enclosures tailored for avionic systems. The various MiniMRP variants are subjected to diverse terrain loading scenarios, encompassing dynamic, thermal shock, and drop tests. The aluminum enclosures of MiniMRP have been replaced with composite alternatives, necessitating compliance with the designated standards. Conducting physical tests on specimens is time-intensive, leaving no room for redesign iterations after physical testing. Designers aspire to swiftly predict assembly performance so that design modifications can be made quickly before advancing to the physical test process. The component is made of a short-fiber composite material, with E-glass as the reinforcement and PEI as the matrix. Mapping the fiber orientation onto the CAE mesh is a difficult task; hence, a mold-flow simulation is carried out to compute the fiber orientation, and the mapped files are imported into the structural simulation. Material properties are calibrated according to the fiber aspect ratio and then applied. Simulation has been systematically conducted for all sets of physical test plans, identifying potential design failure risks and offering recommendations to meet the design requirements. To verify dynamic structural strength, modal, random vibration, and harmonic analyses were carried out. Transient thermal analysis was employed to forecast component thermal behavior, while static structural analysis addressed torque loading and bolt pretension loading. Simulation was employed comprehensively to evaluate the structural integrity of the assembly.
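For orientation, one common way a short-fibre composite modulus is estimated from the fibre aspect ratio is the Halpin-Tsai relation $E_{11} = E_m\,(1 + \zeta\eta V_f)/(1 - \eta V_f)$ with $\eta = (E_f/E_m - 1)/(E_f/E_m + \zeta)$ and $\zeta = 2\,l/d$, where $V_f$ is the fibre volume fraction and $l/d$ the aspect ratio; this generic micromechanics form is quoted only to illustrate how the aspect ratio enters the property calibration and is not necessarily the specific model used in this work.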
Presented By Vishnuvardhan Ranganathan (ThermoAnalytics)
Authored By Vishnuvardhan Ranganathan (ThermoAnalytics)Sacha Jelic (ThermoAnalytics GmbH)
AbstractIn the event of significant structural or thermal abuse to a battery cell, elevated temperatures can trigger a runaway exothermic reaction that can propagate throughout the pack. During thermal runaway, the pressure relief mechanisms of the pack will result in venting (degassing) of hot, combustible gases. The potential for high temperatures and fire poses a significant danger to vehicle occupants and first responders. For this reason, regulations guaranteeing 5 minutes for occupant egress before a hazardous situation arises in the vehicle cabin have been implemented in China, with similar regulations anticipated globally. To meet these global safety standards, OEMs must investigate the integration of runaway propagation prevention and damage mitigation methods within battery pack and vehicle designs. This paper presents a transient simulation workflow that is capable of simulating thermal runaway propagation of multiple cells inside the battery pack for a duration of up to 30 minutes, together with the effect of the degassing on external vehicle system components. The method is a coupled approach between a 3D CFD solver that simulates the convection and a 3D thermal simulation software that handles the conduction and radiation. By coupling and decoupling the convection from CFD and the conduction/radiation solver, long transient scenarios can be simulated with quick turn-around times without compromising accuracy. The simulation includes a predictive method for determining when the next battery cell goes into thermal runaway, which then automatically triggers the cell temperature increase and venting. It will be demonstrated how this workflow can be applied to vehicle thermal analysis and heat protection studies, and how to slow down the propagation of thermal runaway or even prevent it completely by applying thermal insulation materials between the cells. Surrounding components can be protected by applying shielding to prevent them from melting or catching fire. The cabin compartment floor and carpet temperatures can be predicted, and insulation materials can be applied to prevent the temperatures inside the cabin from becoming dangerously high or materials from catching fire, potentially endangering vehicle occupants.
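A deliberately simplified sketch of the propagation-trigger bookkeeping is shown below; the threshold temperature, heat release and vent rate are invented numbers, and the commercial workflow's predictive runaway criterion is more involved than a plain temperature check.

```python
TRIGGER_TEMP_C = 180.0   # hypothetical runaway onset temperature
VENT_DURATION_S = 30.0   # hypothetical venting duration per cell

def update_runaway_state(cells, time_s):
    """Mark cells whose temperature exceeds the trigger threshold and activate their
    heat-release and vent-gas source terms for the coupled CFD/thermal step."""
    for cell in cells:
        if not cell["in_runaway"] and cell["temperature_c"] >= TRIGGER_TEMP_C:
            cell["in_runaway"] = True
            cell["runaway_start_s"] = time_s
        if cell["in_runaway"]:
            venting = (time_s - cell["runaway_start_s"]) <= VENT_DURATION_S
            cell["heat_source_w"] = 5000.0 if venting else 0.0       # placeholder heat release
            cell["vent_mass_flow_kg_s"] = 0.01 if venting else 0.0   # placeholder vent rate
    return cells
```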
Presented By Mohamed Ben-Tkaya (Safran Nacelles (Le Havre))
Authored By Mohamed Ben-Tkaya (Safran Nacelles (Le Havre))Gilles Dubourg (Siemens Digital Industries Software)
AbstractThe design and development of new nacelles is an iterative, complex, and costly process. The pre-dimensioning phase plays a crucial role in this process and in the selection of solutions. The product's complexity and its evolution within a complex and multiphysical environment necessitate interaction across various domains during the dimensioning phase (systems, fluids, thermal). This requirement leads to extended timelines and prolonged development periods. The classical approach, based on the finite element method, demands a setup and model preparation phase (meshing, connections, loading), necessitating highly qualified and specialized personnel for each domain as well as complex methodologies. Interactions between domains involve exchanging results and performing mapping steps between the different tools used. To reduce development time, the implementation of a new methodology is essential. In this work, we present a pre-dimensioning methodology based on multibody modeling with Simcenter 3D. The main objective of this work is to develop a methodology that can be used by a non-specialized team and to provide a tool that can be shared by different users from various specialties. Many challenges were addressed during this work, including hyperstaticity, complex loading, flexibility, composite material properties, and non-linear systems. One of the main problems is how to predict the correct loads on joints and actuators. Our study demonstrates the importance of considering the distribution of aerodynamic loads on parts for the proper dimensioning of actuators and fixation points, revealing that approximations using an equivalent loading point are insufficient. To meet this need, a specific module was developed to map the loads onto a flexible body. The methodology was validated by comparing the FEA results with the new approach. The new approach has halved the dimensioning cycle and provided an easy-to-use tool that does not require extensive expertise in the field. Additionally, the tool opens up prospects for dynamic dimensioning coupled with system calculations. The good results and correlation observed using this methodology make it a strong candidate to replace the traditional finite element method during the preliminary design phases of various nacelle components, thereby significantly reducing the required time.
Presented By Andreas Nicklaß (GNS Systems GmbH)
Authored By Andreas Nicklaß (GNS Systems GmbH)Patrick Schroeder (GNS)
AbstractVirtual methods have played an important role in the development of technical products in the automotive industry for years. This generates enormous amounts of data, and it is established practice, at least among OEMs, to manage this data in a simulation data management (SDM) system. In this presentation, we will show that such data management is essential for management and traceability, and we will use examples from the day-to-day practice of an OEM's simulation department to demonstrate how new possibilities for post-processing, reporting and data analysis can be exploited on the basis of structured data storage. The crash simulation and NVH examples presented show how advanced post-processing and sophisticated reporting templates help to quickly and clearly extract the key features from a collection of simulation results. Powerful methods for generating such templates make the rigid template system more flexible, and the templates can easily be adapted to extended requirements by users without in-depth knowledge of the template rules. This gives engineers the tools they need to focus on the issues at hand, rather than wasting time on data preparation. In another example, we show how visualization provides new insights and reveals correlations. These analyses are only possible because the simulation results can be easily selected in the underlying SDM system. Above all, it is also possible to quickly change or expand the data included in the analysis. The interesting features can also be used for quick access to the underlying calculation runs, which in turn can be traced back to the CAD data and load cases. The basis for these analysis options is the implementation of powerful post-processing routines, which access the simulation data and the associated metadata through the API of the SDM system. These extensions are developed in close cooperation between the users and the experienced SDM experts at GNS Systems.
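Purely as an illustration of what such an API-based post-processing routine can look like (the SDM product is not named here, and the endpoint, field names and authentication shown are invented), a query for all finished runs of one load case might be:

```python
import requests

SDM_URL = "https://sdm.example.com/api"  # placeholder base URL

def crash_runs_for_project(project_id, load_case, token):
    """Fetch metadata and result-file locations for all finished runs of one load case,
    so that post-processing and reporting templates can be applied to them."""
    response = requests.get(
        f"{SDM_URL}/projects/{project_id}/runs",
        params={"loadcase": load_case, "status": "finished"},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return [
        {"run_id": r["id"], "cad_revision": r["cad_revision"], "results": r["result_files"]}
        for r in response.json()
    ]
```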
BiographyDr. Andreas Nicklaß has been working at GNS Systems GmbH since 2014 at the Sindelfingen (Germany) site, where he heads the ‘Data Management’ team. He studied chemistry and received his doctorate in theoretical chemistry. After completing his studies, he worked in academia, at a research centre and finally in the private sector in the field of scientific computing. His work at GNS Systems covers all aspects of the operation, implementation, customisation, further development and user consulting for simulation and process data management systems (SPDM). Other activities include consulting and the development of concepts for data management in virtual product development as well as for ADAS and autonomous driving.
Presented By József Nagy (eulerian-solutions)
Authored By József Nagy (eulerian-solutions)J. Nagy (eulerian-solutions e.U., Austria) Z. Major, Dr. J. Maier, DI. G. Seebach (Johannes Kepler University Linz, Austria) W. Fenz, S. Thumfart, M. Giretzlehner (RISC Software GmbH, Austria) A. Gruber, M. Gmeiner (Universitätsklinik für Neurochirurgie, Austria)
AbstractBiomedical multiphysics applications like the Fluid-Structure Interaction (FSI) simulation of cerebral aneurysms pose a challenging task. Cerebral aneurysms are present in about 2-5% of the general population. The primary risk they pose is rupture, which can lead to subarachnoid hemorrhage, a condition associated with high levels of mortality and morbidity. Numerous studies have highlighted various medical, genetic, morphological, and fluid dynamic factors that influence the initiation, growth, and rupture of aneurysms. Treating aneurysms requires highly invasive procedures, which carry risks of complications. Therefore, it is crucial to make informed decisions regarding whether an aneurysm necessitates treatment, based on its proximity to rupture. Simulations can be a useful tool to support the decision-making process. Numerous studies utilizing computational fluid dynamics (CFD) have shown differences between ruptured and unruptured aneurysms in their fluid dynamic behavior. These phenomena serve as mechanical stimuli that are converted into biological signals, potentially leading to aneurysm growth and rupture. However, the geometry of the aneurysm influences these fluid dynamic parameters; thus, under certain circumstances the change in shape due to deformation of the soft tissue also has to be considered. In the literature, there are only a few computational studies focusing on the structural mechanics of cerebral aneurysms. With the presented Fluid-Structure Interaction methodology we are not only able to assess the fluid dynamic behavior of soft tissues like cerebral aneurysms, but we can also assess structural mechanical phenomena in the blood vessel wall. With this we can understand the interaction of fluid mechanics and structural mechanics of aneurysms in more detail. By automating the simulation of many cerebral aneurysms, it is possible to generate parameters and evaluate them statistically. With the help of univariate and multivariate regression analysis (General Linear Model, GLM) it is possible to derive regression functions which can help assess the rupture risk of aneurysms by considering geometric, fluid dynamic, and structural mechanical parameters. This approach can help neurosurgeons in their decision making regarding the necessity of operative aneurysm treatment.
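A small illustration of the statistical step is given below, fitting a logistic model to hypothetical per-aneurysm parameters; the predictors, sample size and data are placeholders and do not represent the study's actual dataset or model specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 120  # hypothetical number of simulated aneurysm cases

# Hypothetical predictors: geometric aspect ratio, time-averaged wall shear stress, wall stress.
X = np.column_stack([
    rng.normal(1.6, 0.5, n),    # aspect ratio [-]
    rng.normal(4.0, 1.5, n),    # wall shear stress [Pa]
    rng.normal(0.25, 0.08, n),  # wall stress [MPa]
])
ruptured = rng.integers(0, 2, n)  # placeholder rupture labels

model = sm.Logit(ruptured, sm.add_constant(X)).fit(disp=False)
print(model.summary())  # coefficients indicate how each parameter relates to rupture risk
```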
Authored & Presented By Alan Wegienka (Design Simulation Technologies, Inc)
AbstractFurniture tip-over accidents can be fatal, with more than 470 children in the United States (U.S.) having died from tip-overs since 2000. According to the U.S. Consumer Product Safety Commission (CPSC), tip-overs also lead to an average of 22,500 emergency-room-treated injuries each year. Aiming to stop such devastating incidents, the Stop Tip-overs of Unstable, Risky Dressers on Youth Act (STURDY Act) was passed by the U.S. Congress on 23 December 2022 and went into effect on 1 September 2023. The act requires that Clothing Storage Units (CSUs) be tested for stability under several scenarios prior to being approved for sale in the United States. CSUs are designed and manufactured by organizations with expertise in products made of wood and wood-like materials, many of them smaller companies. CAD is routinely used in the design process, but simulation use is less common and many of the CSU producers do not have simulation experts on staff. Multi-body dynamics (MBD) software can easily simulate the tests required by the STURDY Act, but even with easy-to-use MBD products, the setup requirements of CSU models for stability tests, in terms of time and expertise, are not attractive to CSU designers. The STURDY Simulator was developed to provide designers of CSUs with a quick and easy method of evaluating a CSU design against the requirements of the STURDY Act. The STURDY Simulator runs within the same CAD system used by CSU designers and utilizes an advanced MBD solver for the calculations. With a few minutes of setup and a few more minutes to run the simulation, a CSU can be evaluated virtually against the testing methodologies of the STURDY Act. The simulator also assists designers by providing information about the CSU that extends beyond the requirements of the STURDY Act. This paper describes in detail the philosophy and implementation of the STURDY Simulator.
BiographyAlan Wegienka obtained a Bachelor of Science in Agricultural Engineering from Michigan State University. He began his career as an Engineer at the John Deere Product Engineering Center, supporting the use of their mechanical CAD systems. Transitioning into the software industry, Alan took on roles at Schlumberger Technologies CAD/CAM Division and Aries Technology, Inc., where he managed sales, marketing, and technical operations throughout the Asia-Pacific region. In the mid-1990s, Alan co-founded Design Technologies International in Singapore, developing Multibody Dynamics software integrated with AutoCAD products. The company was acquired by Mechanical Dynamics, Inc., and Alan served as Vice-President, leading the Design Technologies Division and focusing on embedding Multibody Dynamics software into leading CAD systems. After the acquisition of Mechanical Dynamics by MSC.Software, Alan continued to lead the team charged with embedding simulation technologies in CAD systems. In 2006, he founded Design Simulation Technologies, Inc., and as President, has led the development of mechanical simulation software focused on designers and engineers. With over four decades of experience in mechanical design and simulation software, Alan has seen the industry grow from computer-aided graphics tools running on specialized hardware to the powerful 3D design and simulation tools running today on our notebook computers.
Presented By Matthias Kabel (Fraunhofer ITWM)
Authored By Matthias Kabel (Fraunhofer ITWM)Maxime Krier (Fraunhofer Institute for Industrial Mathematics ITWM) Marc Dillmann (Fraunhofer Institute for Industrial Mathematics ITWM) Heiko Andrae (Fraunhofer Institute for Industrial Mathematics ITWM) Fabian Welschinger (Robert Bosch GmbH) Jonathan Koebler (Robert Bosch GmbH) Armin Kech (Robert Bosch GmbH) Maiko Ersch (Robert Bosch GmbH) Sebastian Moennich (PEG Plastics Engineering Group GmbH) Jan Wolters (Institute of Plastics Processing (IKV) at RWTH Aachen University) Noah Mentges (Institute of Plastics Processing (IKV) at RWTH Aachen University) Fabio Di Batista (Institute of Plastics Processing (IKV) at RWTH Aachen University) Christian Hopman (Institute of Plastics Processing (IKV) at RWTH Aachen University) Felix Fritzen (SC Simtech, Data Analytics in Engineering, University of Stuttgart)
AbstractEver-increasing legal requirements and the pursuit of more energy-efficient washing machines make lightweight construction indispensable in this sector. A current challenge arises from the requirements for environmental labeling, which were tightened by the EU in March 2021. To receive the best energy label in the future, washing machines must be designed for higher spin speeds. The higher mechanical requirements associated with this can no longer be met with short-fiber-reinforced thermoplastic (SFT) components. Long-fiber-reinforced thermoplastics (LFT) are a promising alternative with better resistance to fatigue damage. We will present a multiscale simulation approach for the fatigue design of components made of fiber-reinforced plastics. This approach only requires the following measurement effort on tensile bars taken from plates for calibration: 1. a CT image to determine the fiber orientation (across the thickness of the plate); 2. incineration of the fiber-reinforced plastic to determine the fiber length distribution; 3. determination of S-N curves in which the drop in dynamic stiffness was also recorded during the measurement. Using the fiber orientation and fiber length distribution, a virtual multilayer microstructure model is constructed that allows us to use elastic FFT-based full-field simulations to inversely calibrate the Young's modulus of the plastic at the beginning of the fatigue measurement. Subsequently, we perform FFT-based fatigue simulations on virtual long-fiber-reinforced volume elements with different fiber orientations and (mean) fiber lengths. Using model order reduction methods, we obtain an effective material map for the fatigue behavior of the fiber-reinforced plastic that considers the local fiber structure. The decreasing dynamic stiffness recorded during the measurement of the S-N curves allows us to calibrate the fatigue speed of this material map. After additional calibration of a failure criterion against the S-N curves for notched and unnotched tensile bars, the time and location of failure of a component made of this fiber-reinforced plastic can also be efficiently predicted in three steps: 1. run an injection molding simulation, e.g. with Moldflow; 2. transfer the fiber orientations and (mean) fiber lengths to the FE mesh; 3. run the fatigue simulation using the effective material map as a UMAT in Abaqus. We will carry out this procedure for the suds container (washing machine tub) as an example and compare the predictions with component measurements. In addition, we will present the computational effort and the scalability of the individual steps.
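For orientation, measured S-N data and the recorded stiffness drop are often summarized by standard forms such as the Basquin relation $\sigma_a = \sigma_f'\,(2N_f)^b$ together with a stiffness-based damage indicator $d(N) = 1 - E_{\mathrm{dyn}}(N)/E_{\mathrm{dyn}}(0)$; these generic expressions are quoted only to indicate how the dynamic-stiffness decrease can be used to calibrate the fatigue evolution of the material map, and the project's specific model may differ.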
BiographyMatthias Kabel is an applied mathematician and holds a PhD from the University of Hamburg, Germany. He is deputy head of the Department of Fluid and Material Simulation and team leader for Lightweight Design and Insulating Materials at the Fraunhofer Institute for Industrial Mathematics (ITWM) in Kaiserslautern, Germany. His research interests include partial differential equations (with FFT-based methods), multi-scale methods for composites, digital rock physics (DRP), high performance computing (HPC) and quantum computing. He has authored or co-authored over 40 scientific articles. His developments are distributed as part of the GeoDict software (www.geodict.com) by the Fraunhofer spin-off Math2Market.
Presented By Tobias Bernarding (Dassault Systemes Deutschland)
Authored By Tobias Bernarding (Dassault Systemes Deutschland)Gerhard Oettl (Dassault Systemes Deutschland GmbH) Majid Norooziarab (Dassault Systemes Deutschland GmbH)
AbstractThe integration of automated workflows for coupled electromagnetic and structural simulations marks a transformative step forward in engineering analysis. This approach enables the seamless exchange of data between electromagnetic simulations and structural analyses, allowing engineers to evaluate how electromagnetic forces influence material deformation and structural integrity with greater precision and efficiency. By bridging these domains, this innovation addresses critical challenges across industries such as aerospace and defence, energy, industrial equipment, and automotive, where understanding the interaction between high-power electromagnetic systems and structural components is vital for ensuring performance, safety, and compliance. In sectors like electric vehicles, this capability supports optimization of system efficiency while meeting rigorous safety and regulatory standards. For consumer electronics and medical devices, it enhances the reliability of designs involving electromagnetic fields.Historically, coupling these two simulation domains has been a labor-intensive process requiring extensive manual effort. Initial implementations relied on the exchange of large volumes of data through text files and scripts, resulting in time delays, inefficiencies, and memory-related challenges. For example, simulating the mechanical impact of high-current electromagnetic force - such as Lorentz forces - on structures involved complex, error-prone steps to translate electromagnetic force data into input for structural solvers. These limitations hindered the ability to conduct high-fidelity simulations, often necessitating physical prototyping to address the gaps.The automated workflow eliminates these bottlenecks by streamlining the process of data transfer and interpretation, enabling more robust and accurate multiphysics simulations. By automating these steps, the workflow reduces resource consumption and minimizes errors, paving the way for faster and more reliable insights into the interplay between electromagnetic and structural phenomena. This advancement not only improves product development efficiency but also promotes innovation by providing a deeper understanding of complex physical interactions. The result is a powerful tool that helps engineers deliver safer, more efficient, and innovative solutions across diverse industries.
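The central coupling quantity in such workflows is the electromagnetic body-force density handed from the electromagnetic solver to the structural solver; for a current-carrying conductor this is the Lorentz force density $\mathbf{f} = \mathbf{J}\times\mathbf{B}$ (in N/m³), which is integrated over each element and applied as a mechanical load in the structural analysis. This standard relation is stated here for context rather than as a description of the specific solver implementation.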
Authored & Presented By Astrid Walle (Siemens Energy Global)
DescriptionIn this talk, we will examine the readiness of the CAE universe for the adoption of foundational models, exploring what these models truly mean for engineering design and simulation. We will begin by outlining a vision for AI in engineering, questioning whether we can provide a base for foundational models given our current state in data handling and whether we can make the most of their transformative potential. We will discuss the requirements for implementing these models, such as simulation data management and computational capabilities. By assessing the status quo in research and development, we will highlight both the progress made and the hurdles that remain. While we’ve made significant progress with AI applications in CAE, we’re just scratching the surface of what foundational models can offer. One thing is certain: As we continue to explore their capabilities, exciting possibilities lie ahead!
BiographyAstrid Walle is a mechanical engineer with a PhD in CFD and more than a decade of experience in applied fluid mechanics. She has held several positions in gas turbine R&D and AI development at Siemens Energy, Vattenfall and Rolls Royce. Following her professional determination to bring AI and Data Science into engineering she ran her own business and worked as a Product Manager in a software startup before she rejoined Siemens Energy to establish the usage of data from the very beginning in product development.
Authored & Presented By Albrecht Pfaff (Consultant)
DescriptionMore details and panellists will be announced soon.
Authored & Presented By Manfred Zehn (Manfred W. Zehn)