They say the only constant in life is change and that’s as true for blogs as anything else. After almost a dozen years blogging here on WordPress.com as Another Fine Mesh, it’s time to move to a new home, the … Continue reading
The post Farewell, Another Fine Mesh. Hello, Cadence CFD Blog. first appeared on Another Fine Mesh.
Welcome to the 500th edition of This Week in CFD on the Another Fine Mesh blog. Over 12 years ago we decided to start blogging to connect with CFDers across the interwebs. “Out-teach the competition” was the mantra. Almost immediately … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
Automated design optimization is a key technology in the pursuit of more efficient engineering design. It supports the design engineer in finding better designs faster. A computerized approach that systematically searches the design space and provides feedback on many more … Continue reading
The post Create Better Designs Faster with Data Analysis for CFD – A Webinar on March 28th first appeared on Another Fine Mesh.
It’s nice to see a healthy set of events in the CFD news this week and I’d be remiss if I didn’t encourage you to register for CadenceCONNECT CFD on 19 April. And I don’t even mention the International Meshing … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
Some very cool applications of CFD (like the one shown here) dominate this week’s CFD news including asteroid impacts, fish, and a mesh of a mesh. For those of you with access, NAFEM’s article 100 Years of CFD is worth … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
This week’s aggregation of CFD bookmarks from around the internet clearly exhibits the quote attributed to Mark Twain, “I didn’t have time to write a short letter, so I wrote a long one instead.” Which makes no sense in this … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
Frost forms hexagonal columns on a wooden rail in this microphotograph by Gregory B. Murray. Like in snowflakes, when water molecules freeze they position themselves to form six-sided crystals. From this perspective, it looks like a miniature version of the Giant’s Causeway. (Image credit: G. Murray; via Ars Technica)
Scientists have unveiled the sharpest images ever captured of a solar flare. Taken by the Inouye Solar Telescope, the image includes coronal loop strands as small as 48 kilometers wide and 21 kilometers thick, the smallest ever imaged. The width of the overall image is about 4 Earth diameters. The captured flare belongs to the most powerful class of flares, the X class. Catching such a strong flare under perfect observation conditions is a wonderful stroke of luck.
Although astronomers had theorized that coronal loops included this fine-scale structure, the Inouye Solar Telescope is the first instrument with the resolution to directly observe structures of this size. Confirming their existence is a big step forward for those working to understand the details of our Sun. (Video and image credit: NSF/NSO/AURA; research credit: C. Tamburri et al.; via Gizmodo)
Off Western Australia, hundreds of low-lying islands and coral reefs jut into the ocean as part of the Buccaneer Archipelago. Tides here have a range of nearly 12 meters, so water rips through the narrow channels as the tide ebbs and flows. These fast flows lift sediment that dyes the water a bright turquoise. (Image credit: M. Garrison; via NASA Earth Observatory)
Sea ice’s high reflectivity allows it to bounce solar rays away rather than absorb them, but melting ice exposes open waters, which are better at absorbing heat and thus lead to even more melting. To understand how changing sea ice affects climate, researchers need to tease out the mechanisms that affect sea ice over its lifetime. A new study does just that, showing that sea ice loses salt as it ages, in a process that makes it less porous.
Researchers built a tank that mimicked sea ice by holding one wall at a temperature below freezing and the opposite wall at a constant, above-freezing temperature. Over the first three days, ice formed rapidly on the cold wall. But once formed, the ice did not simply sit there. Instead, the researchers noticed it changing shape while maintaining the same average thickness. The ice also grew more transparent over time, indicating that it was losing its pores.

Looking closer, the team realized that the aging ice was slowly losing its salt. As the water froze, it pushed salt into liquid-filled pores in the ice. One wall of the pore was always colder than the others, causing ice to continue freezing there, while the opposite wall melted. Over time, this meant that every pore slowly migrated toward the warm side of the ice. Once a pore reached the surface, the briny liquid inside was released into the water and the ice left behind had one fewer pore. Repeated over and over, the ice eventually lost all its pores. (Image credit: T. Haaja; research credit and illustration: Y. Du et al.; via APS)

Although wind turbines can have any number of blades, most that we see have three. The reasons for that are many, as explained in this Minute Physics video. In terms of physics, wind turbines with more blades produce more torque, but they pay for it with more drag. Engineering-wise, wind turbines with odd numbers of blades have less uneven forces on them, and, thus, cost less. And, finally, people just prefer the look and sound of 3-bladed wind turbines over other forms! (Video and image credit: Minute Physics)
Photographer Daniele Borsari captured this gorgeous composite image of nebulas in black and white, emphasizing the motion underlying the gas and dust. In the upper right, the Orion Nebula shines, bright with new stars. In the lower left, you can pick out the distinctive shape of the Horsehead Nebula and, further to the left, the Flame Nebula. We often see nebulas in bright colors, but I love the way black and white highlights the turbulence surrounding them. (Image credit: D. Borsari/ZWOAPOTY; via Colossal)
Hi sakro,
Sadly my experience in this subject is very limited, but here are a few threads that might guide you in the right direction:
Best regards and good luck! Bruno
[·] <-- this dot is just a general sign for multiplication; both multiplication of scalars and scalar multiplication of vectors can be denoted by it. If I multiply vectors, I denote them as vectors (i.e. with an arrow above); everything without an arrow above is a scalar.
[?] and [?] are tangent and cotangent, respectively.
[?] is the logarithm with base 10.
[?] is the natural logarithm.
[?], [?] and [?] are all the same thing.
[?] is called the diffusion velocity; see, e.g., general.H line 92.
[?] is called the drift velocity; see, e.g., general.H line 63.
[?] is declared in the createFields.H file (see line 57), which is part of interPhaseChangeFoam, not of driftFluxFoam.

Enhanced Reproducibility: Scripting captures the complete post-processing state, including probes and data settings, ensuring consistent and reproducible analysis results.
Launch Analysis with a Script: The post-processing process is initiated by running the `postrecord.py` script, which generates the necessary output and begins the analysis.
Reproduce Results with a File: The script generates a `results_analysis_state.py` file, which encapsulates a set of Python commands designed to replicate the post-processing setup. This ensures the reproducibility and consistency of output results across different simulation runs that utilize identical configurations.
Imagine simulating complex physics-based models involving turbomachinery, combustion, or aerodynamics, and spending hours fine-tuning the post-processing settings to gain valuable insights. With post-processing state recording, Fidelity users can preserve this state of analysis and reuse it for future simulations with equivalent configurations. This feature is particularly useful for ensuring reproducibility, automating repetitive workflows, and sharing analysis setups across a team.

To record the post-processing state, the Fidelity user runs the `postrecord.py` script. This script generates a `results_analysis_state.py` file in the project directory, which contains Python commands to fully reproduce the post-processing setup of the current simulation session.
The Fidelity user can then replay the recorded script to restore the results analysis environment to a previous state, streamlining the analytical process.
Ensure the `postrecord.py` script is executed while the intended post-processing configuration is active.
The `results_analysis_state.py` file can be used to automate repetitive tasks and workflows.
The recorded-script feature enables seamless collaboration and knowledge sharing among team members: by recording and saving scripts, teams can share best practices, reduce miscommunication, and onboard new team members more quickly. Experiment with different post-processing settings and record various states to create a library of reusable analysis templates.
By incorporating post-processing state recording into your Fidelity workflow, you'll streamline your results analysis process, improve consistency, and reduce manual setup time.
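The record-and-replay idea behind this workflow is straightforward to illustrate. The sketch below is not Fidelity's actual API (the class and method names here are hypothetical placeholders); it only shows the general pattern of logging each post-processing command as an executable Python line and dumping the log as a replay script:

```python
# Generic illustration of the record/replay pattern behind state scripting.
# NOTE: this is NOT Cadence Fidelity's API -- RecordingSession and its
# methods are hypothetical, chosen only to show how a session can be
# captured as code.

class RecordingSession:
    """Records every post-processing command as a replayable Python line."""

    def __init__(self):
        self._log = []

    def set_probe(self, name, location):
        # Log the call instead of (or in addition to) applying it.
        self._log.append(f"session.set_probe({name!r}, {location!r})")

    def set_contour(self, field, levels):
        self._log.append(f"session.set_contour({field!r}, levels={levels})")

    def save_state(self, path):
        # Write a script that, when executed, rebuilds the same state.
        with open(path, "w") as f:
            f.write("\n".join(self._log) + "\n")

session = RecordingSession()
session.set_probe("outlet_probe", (0.1, 0.0, 0.0))
session.set_contour("pressure", levels=20)
session.save_state("results_analysis_state.py")
```

Executing the saved file against a live session object would then reissue the same commands in order, which is what makes the recorded state portable across runs with identical configurations.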
Figure 1: Structured hexahedral volute mesh using GridPro’s automation tool.
Discover how GridPro’s automation solution is revolutionizing volute mesh generation for turbochargers, pumps, and compressors. Explore the significance of volutes in engineering, the driving forces behind turbocharger design, and learn how structured meshes drive innovation. Streamline your engineering processes with GridPro’s innovative tool.
The shipping industry is propelled by a blend of economic, environmental, and regulatory pressures. Regulations such as the International Maritime Organization’s (IMO) MARPOL Annex VI, limit sulfur oxides (SOx) and nitrogen oxides (NOx) emissions from ships, compelling the industry to adopt cleaner, more efficient technologies.
Also, rising fuel costs and regulations like the Energy Efficiency Design Index (EEDI) mandate improved fuel efficiency in ship designs. Moreover, the quest for operational efficiency and competitive advantage fuels innovation, prompting shipbuilders and operators to seek cutting-edge solutions.

The need for efficient, powerful, and environmentally friendly ship engines is significantly influencing the design and development of turbochargers. Turbochargers, essential for improving engine efficiency and reducing emissions, must evolve to meet these new challenges. They need to be more durable, efficient, and environmentally friendly, pushing engineers to innovate continually.
One key area of improvement in turbocharger performance is the design of the volute. The volute, a spiral casing that guides exhaust gases into the turbine, plays a crucial role in the turbocharger’s efficiency. Optimizing the shape and size of the volute can lead to significant performance gains, including higher pressure ratios, improved airflow, and reduced energy losses. This translates into tangible benefits for ship operators.
CFD plays a pivotal role in the design and development of turbochargers. It allows engineers to simulate and analyze fluid flow, heat transfer, and other physical phenomena within the turbocharger, providing insights that are impossible to obtain through traditional testing methods. It enables engineers to identify performance bottlenecks, optimize the shape and size of volutes and other components, and explore novel design concepts without extensive physical prototyping. This accelerates the development process and leads to more effective and efficient turbocharger designs.
To further boost the design and development of turbochargers, GridPro is introducing volute mesh automation. Utilizing advanced meshing algorithms, it creates precise, high-quality meshes for CFD simulations, significantly reducing manual effort. This automation ensures consistent mesh quality, facilitating accurate CFD analyses. Engineers can iterate on volute designs rapidly, optimizing performance and accelerating the overall development of turbochargers and volutes. This ultimately results in better-performing turbochargers tailored to meet the stringent demands of the shipping industry.
The volute in a turbocharger plays a crucial role by guiding exhaust gas flow from the engine towards the turbine blades, where the exhaust gas’s energy is converted into mechanical energy to drive the compressor. This function is vital, as the volute’s design significantly impacts the efficiency and flow characteristics of the turbine under various operating conditions.

A well-designed volute with an optimal cross-sectional shape is essential for providing uniform flow to the rotor at the desired angle. This uniformity maximizes energy recovery and enhances the efficiency of the turbocharger turbine. The cross-sectional shape of the volute directly influences the direction and magnitude of the flow at the turbine rotor inlet, affecting the overall efficiency of the turbocharger. An optimized volute design can lead to improved cycle-averaged efficiency, especially under the pulsating flow conditions typical of internal combustion engine exhausts. Enhanced efficiency results in better energy recovery from the exhaust gas, thereby increasing the engine’s power density.
Moreover, the cross-sectional shape of the volute impacts secondary flow patterns and the development of vortices within the volute. For instance, a volute designed to produce smaller vortices will exhibit faster response times and superior performance under pulsating conditions compared to one with larger vortices, which have more inertia and respond more slowly.
Different volute designs can lead to varying levels of total pressure loss and flow distortion. A volute with a sharper corner and flatter cross-sectional shape can enhance secondary flow development, resulting in higher pressure losses and more distorted flow at the rotor inlet, which deteriorates the turbine’s performance. Conversely, optimized volute shapes can reduce these losses and improve flow uniformity, contributing to better overall performance.

Additionally, the volute’s design determines its sensitivity to the pulsating nature of the exhaust flow. A well-designed volute can maintain a more stable and predictable flow pattern even under unsteady conditions, which is crucial for maintaining high efficiency and performance in real-world operating conditions of internal combustion engines.
In short, the volute is a critical component in a turbocharger that significantly affects its performance by influencing flow patterns, efficiency, pressure losses, and sensitivity to pulsating flow conditions. Optimizing the volute design can lead to marked improvements in turbocharger efficiency, power density, and overall engine performance.
The evolution of turbocharger design is steered by a multitude of factors, each contributing to the relentless pursuit of innovation and efficiency across the automotive, shipping, and aerospace industries.
A primary force behind this evolution is the increasingly stringent emissions standards worldwide. Manufacturers are under pressure to develop turbochargers that enhance engine thermal efficiency and significantly reduce CO2, NOx, and particulate matter emissions. Turbocharging plays a pivotal role in meeting these regulatory requirements by improving combustion efficiency.
Concurrently, the growing emphasis on fuel economy and sustainability is pushing turbocharger designs to focus on enhancing engine efficiency. The goal is to increase power output without a significant rise in fuel consumption.

This is particularly important as the trend towards downsizing engines continues to gain traction. By reducing engine size while maintaining or improving performance, turbochargers enable smaller engines with reduced displacement to produce the same or higher power outputs. This is achieved by increasing the air intake pressure, thereby enhancing volumetric efficiency. When downsizing is combined with downspeeding—operating at lower engine speeds—fuel consumption is further reduced, and vehicle weight is minimized.
Despite these advancements, traditional turbochargers often suffer from slow transient response, which negatively impacts vehicle drivability and acceleration. To address this issue, innovations such as electrically-assisted turbochargers are being developed. These new designs improve response time without causing parasitic losses to the engine, a crucial improvement for maintaining the performance and attractiveness of turbocharged vehicles.
Intense competition in the automotive industry further motivates manufacturers to continuously innovate turbocharger designs, striving to stay ahead in terms of performance, efficiency, and reliability. This competitive drive is closely linked to the need to meet specific market demands and customer expectations. As a result, there is a strong focus on developing turbochargers that offer better drivability, fuel efficiency, reduced turbo-lag, and overall improved vehicle performance.
Technological advancements play a significant role in this ongoing evolution. Progress in materials, manufacturing processes, and computational fluid dynamics (CFD) has enabled the development of more efficient and responsive turbochargers. Innovations in materials and structural design contribute to the longevity and reliability of turbochargers, allowing them to withstand high temperatures and high-stress conditions.
In essence, turbocharger design refinement is a dynamic interplay of regulatory pressures, performance demands, technological innovations, market dynamics, and environmental consciousness, all converging to shape the future of propulsion engines.
Structured meshes play a pivotal role in propelling the enhancement of volutes and turbochargers, influencing various factors driving design improvements. Firstly, they elevate the accuracy of simulation results, enabling precise predictions of fluid flow behaviours like pressure distributions and velocity profiles. This insight aids engineers in pinpointing areas for enhancement and refining design parameters.
Secondly, structured meshes deepen the comprehension of flow physics within these components, identifying phenomena like flow separation and vortices. This comprehension inspires innovative design concepts and optimization strategies geared towards boosting performance and efficiency.

Moreover, structured meshes facilitate parametric studies, allowing engineers to systematically optimize geometric parameters while maintaining mesh quality. This exploration of the design space leads to the discovery of optimal configurations aligned with performance objectives.
Additionally, these meshes aid in evaluating aero-thermal performance and mitigating flow instabilities, contributing to more robust and reliable designs. They also support the validation of design concepts by providing accurate predictions for comparison with experimental data, reducing development time and accelerating innovation.
In essence, structured meshes serve as the foundation for accurate simulations, fostering deeper understanding, optimization, and validation processes that collectively drive advancements in volute and turbocharger designs.

Traditionally, the process of generating high-quality meshes for volutes has been labour-intensive and time-consuming, requiring manual intervention and expertise in meshing software. However, with the unveiling of GridPro’s latest innovation, this cumbersome process is now a relic of the past.
GridPro has introduced an automation tool designed to generate topology and mesh for volute geometries. Through its intuitive workflow and robust meshing algorithms, GridPro streamlines the mesh generation process, producing topology and meshes on volutes with high efficiency and precision.
Whether it’s creating structured meshes for volutes with intricate geometries or optimizing mesh density for turbocharger simulations, GridPro empowers engineers to achieve superior results with minimal computational overhead.

GridPro’s automated solution for volute geometries represents more than just a technological advancement; it embodies a paradigm shift in engineering design and simulation. By harnessing the power of automation, engineers can transcend the limitations of manual mesh generation, unlocking new possibilities in product development and optimization.
Gone are the days of laborious meshing processes and tedious iterations. With GridPro’s innovation as their ally, engineers can embrace a future of seamless design, where creativity and efficiency converge to propel projects forward.
Ready to Automate Your Meshing Workflow?
GridPro Xpress Volute
GridPro’s intelligent structured meshing automation solution reduces manual effort and maximizes accuracy—making it ideal for design optimization in CFD.
Schedule a free demo or contact us to see how GridPro can accelerate your simulation pipeline.
In conclusion, GridPro’s automated solution for volute geometries heralds a new era of efficiency and productivity in engineering design. By streamlining the volute mesh generation process and eliminating manual labour, the tool empowers engineers to focus their expertise and creativity on solving complex challenges and driving innovation forward.
As the demands of modern engineering continue to evolve, GridPro remains at the forefront of technological innovation, delivering solutions that redefine the boundaries of possibility. With GridPro’s automation tool, the journey from concept to realization becomes smoother, faster, and more rewarding than ever before.
The post Automatic Structured Hexahedral Meshes for Volutes appeared first on GridPro Blog.
Figure 1: Structured multi-block meshing of a heat pump compressor.
Shifting to low-GWP refrigerants is reshaping centrifugal compressor design for heat pumps. This blog reveals how structured hexahedral meshing with GridPro empowers CFD engineers to tackle real-gas effects, optimize performance, and accelerate innovation—ensuring sustainable, high-efficiency compressors that meet the challenges of a changing HVAC landscape.
As global energy policies tighten and environmental awareness rises, the HVAC and energy industries are shifting toward low-Global Warming Potential (GWP) refrigerants. In this evolving landscape, centrifugal compressors used in heat pumps are undergoing critical redesign. These systems must now meet higher performance and sustainability standards while accommodating complex thermodynamic behaviors. CFD engineers and R&D managers play a pivotal role in ensuring these compressors adapt to new refrigerants without compromising efficiency or reliability. Structured meshing, particularly hexahedral multiblock techniques, is emerging as a powerful enabler of accurate CFD simulations tailored to this transformation.
The push to replace high-GWP refrigerants like R134a is driven by both regulatory mandates and sustainability goals. New options such as R1234yf, R1234ze(E), CO₂, and ammonia offer significantly lower environmental impact. However, these alternatives introduce design complications, including altered thermophysical properties, higher pressures, and non-ideal fluid behaviors. Transitioning to these refrigerants isn’t as simple as swapping fluids; it requires a re-evaluation of compressor design to ensure optimal thermal and mechanical performance.
CFD simulations play a central role in assessing how different refrigerants impact efficiency, mass flow rate, pressure ratio, and power requirements. These metrics are influenced by a refrigerant’s specific heat ratio, compressibility, and speed of sound. For instance, R1234ze(E), with a lower speed of sound than R134a, can result in higher pressure ratios at the same rotational speeds but may require adjustments to blade angle and diffuser geometry to maintain performance. These evaluations are only reliable when supported by high-fidelity meshes that resolve critical features of the compressor flow path.

Accurate CFD modeling enables engineers to simulate real-world operation and iterate compressor designs quickly. With low-GWP refrigerants, real gas effects become prominent and must be incorporated using equations of state like Peng-Robinson or Redlich-Kwong. Property libraries such as NIST REFPROP or CoolProp are commonly integrated into CFD workflows to provide refrigerant-specific data.
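As a minimal illustration of what a cubic equation of state involves, the sketch below solves the Peng-Robinson cubic for the vapor-phase compressibility factor. The R134a critical constants are approximate literature values (Tc ≈ 374.21 K, Pc ≈ 4.059 MPa, acentric factor ≈ 0.327); a production workflow would query REFPROP or CoolProp for exact property data instead:

```python
import math

# Sketch: vapor-phase compressibility factor Z from the Peng-Robinson
# equation of state. The R134a critical constants below are approximate
# literature values, used only for illustration.

R = 8.314  # universal gas constant, J/(mol K)

def pr_vapor_z(T, P, Tc, Pc, omega):
    """Largest real root of the Peng-Robinson cubic (vapor branch)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha   # attraction term (incl. alpha)
    b = 0.07780 * R * Tc / Pc                 # covolume term
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Cubic in Z: Z^3 + c2*Z^2 + c1*Z + c0 = 0
    c2 = -(1.0 - B)
    c1 = A - 3.0 * B**2 - 2.0 * B
    c0 = -(A * B - B**2 - B**3)
    # Newton's method starting from Z = 1 converges to the vapor root.
    z = 1.0
    for _ in range(50):
        f = z**3 + c2 * z**2 + c1 * z + c0
        df = 3.0 * z**2 + 2.0 * c2 * z + c1
        z -= f / df
    return z

# R134a vapor at 300 K and 2 bar deviates a few percent from ideal gas.
z = pr_vapor_z(300.0, 2.0e5, 374.21, 4.059e6, 0.327)
print(f"Z = {z:.3f}")
```

The departure of Z from unity is exactly the real-gas effect the text describes: it grows with pressure, which is why high-pressure refrigerants such as CO₂ cannot be treated as ideal gases in these simulations.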
CFD also supports flow visualization and loss analysis, helping engineers refine impeller and diffuser shapes to reduce aerodynamic losses. Identifying zones of high entropy generation, flow recirculation, or separation enables targeted geometric modifications. For example, blade tip leakage losses and flow detachment in the diffuser region are critical for compressors using refrigerants like CO₂ or hydrocarbons operating at high pressures. Well-resolved CFD studies guide designers in mitigating these effects through blade curvature optimization, hub contouring, or diffuser vane adjustments.
Moreover, system-level performance can be predicted through CFD results linked with thermodynamic cycle simulators. This integration allows teams to evaluate how compressor performance translates into system COP under variable operating conditions. Such an approach provides a holistic view and supports informed decision-making on refrigerant choice and component sizing.
In the CFD workflow, mesh generation directly influences the accuracy, stability, and convergence of the simulation. In centrifugal compressors, flow behavior is strongly affected by complex geometry and rotating machinery effects. Components such as impellers with main and splitter blades, tight tip clearances, and curved volute channels introduce abrupt velocity gradients, secondary flows, and shocks.
Mesh resolution must be high enough to capture boundary layers, pressure gradients, and thermal interactions. Especially in wall-bounded regions, achieving target y+ values (typically <1) is necessary to ensure compatibility with turbulence models like k-ω SST. This is vital when simulating flow separation or heat transfer within the volute or impeller shroud.
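Before meshing, the first-cell height needed for a target y+ is commonly estimated from a flat-plate skin-friction correlation. The sketch below uses Schlichting's correlation; the fluid properties are illustrative placeholders, not tied to any particular refrigerant or geometry:

```python
import math

# Sketch: estimate the first-cell height for a target y+ using the
# Schlichting flat-plate skin-friction correlation. The fluid properties
# in the example call are illustrative stand-ins for a refrigerant vapor.

def first_cell_height(y_plus, U, rho, mu, L):
    """First cell height [m] for a target y+ at reference length L."""
    Re = rho * U * L / mu                        # reference Reynolds number
    cf = (2.0 * math.log10(Re) - 0.65) ** -2.3   # Schlichting, Re < 1e9
    tau_w = 0.5 * cf * rho * U**2                # wall shear stress
    u_tau = math.sqrt(tau_w / rho)               # friction velocity
    return y_plus * mu / (rho * u_tau)

# Target y+ = 1 for a wall-resolved k-omega SST run.
h = first_cell_height(y_plus=1.0, U=150.0, rho=10.0, mu=1.2e-5, L=0.05)
print(f"first cell height ~ {h:.2e} m")
```

The estimate is only a starting point; the actual y+ achieved should be checked from the converged solution and the near-wall spacing adjusted if needed.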
Mesh quality is equally important when working with real gas models. Rapid property changes due to pressure and temperature fluctuations can lead to convergence issues if the mesh is distorted or insufficiently refined. A mesh with good orthogonality, low skewness, and gradual stretching supports robust simulations even under these challenging conditions.
A well-executed grid independence study further enhances credibility by verifying that simulation outputs remain consistent across different mesh densities. Balancing computational cost and accuracy, such studies help teams standardize mesh sizes while maintaining trust in the results.
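A common way to quantify such a study is Richardson extrapolation together with Roache's Grid Convergence Index (GCI). The sketch below assumes three meshes with a constant refinement ratio; the sample solution values are purely illustrative:

```python
import math

# Sketch of a three-mesh grid independence check using Richardson
# extrapolation and Roache's Grid Convergence Index (GCI). f1, f2, f3 are
# a monitored quantity (e.g. pressure ratio) on the fine, medium, and
# coarse meshes; r is the constant refinement ratio between levels.

def grid_convergence(f1, f2, f3, r, Fs=1.25):
    # Observed order of accuracy from the three solutions.
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    # Richardson-extrapolated (mesh-independent) estimate.
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)
    # Fine-mesh GCI with safety factor Fs (1.25 for three-mesh studies).
    gci_fine = Fs * abs((f2 - f1) / f1) / (r**p - 1.0)
    return p, f_exact, gci_fine

# Example: pressure ratio on meshes of ~4M, 1.4M, and 0.5M cells (r ~ 1.4).
p, f_exact, gci = grid_convergence(3.052, 3.060, 3.083, 1.4)
print(f"observed order p = {p:.2f}, extrapolated = {f_exact:.3f}, "
      f"GCI = {100 * gci:.2f}%")
```

A fine-mesh GCI below roughly one percent is a typical acceptance threshold; larger values suggest the monitored quantity is still mesh-sensitive.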
Structured hexahedral meshes are ideal for turbomachinery simulations because they offer higher numerical accuracy and control than unstructured meshes. By aligning elements with the main flow direction, they minimize interpolation errors and numerical diffusion, which is particularly advantageous in high-gradient regions near blade surfaces.
They also facilitate cleaner layering near walls, enabling more reliable use of wall-resolved turbulence models. This becomes especially important when analyzing centrifugal compressors operating under transonic or off-design conditions, where minor differences in wall shear can influence performance and efficiency.
In post-processing, structured meshes allow engineers to interpret simulation results more clearly. Streamlines, pressure contours, and velocity vectors derived from well-ordered grids yield more consistent visualizations, helping teams identify flow anomalies and validate design improvements. The predictability and stability of structured meshes also reduce solver crashes and improve convergence speed—benefits that accumulate over repeated design cycles.

GridPro provides a specialized platform for structured hexahedral mesh generation, optimized for complex geometries like those in centrifugal compressors. Its topology-based approach allows engineers to define reusable block templates that can be adapted to different impeller shapes, diffuser configurations, or refrigerant conditions. This flexibility accelerates geometry-to-mesh workflows, making it easier to manage design iterations.
One of GridPro’s key strengths lies in boundary layer control. With fine resolution settings, engineers can maintain strict y+ targets while smoothly transitioning from near-wall elements to the outer domain. This is particularly useful when working with turbulence models and wall heat transfer, both of which are critical for compressors handling refrigerants with large thermal gradients.
GridPro also supports wake refinement and shock-fitting capabilities. These features are essential for accurately capturing flow structures behind blade trailing edges and in regions of sudden expansion or compression. For example, in compressors operating with high-pressure refrigerants, these mesh refinements help capture oblique shocks and shear layers without excessive numerical dissipation.
GridPro also offers an automation solution specifically valuable for centrifugal compressor design: GridPro Xpress Blade. This tool enables automatic generation of structured multiblock meshes for impeller blades, streamlining the creation of high-quality meshes that align closely with blade geometry. Xpress Blade is programmed to produce solver-ready meshes with minimal manual input. For engineers performing iterative simulations across varying blade profiles or refrigerants, this tool significantly shortens meshing time without compromising grid fidelity. Its ability to consistently generate mesh blocks around blades, splitters, and trailing edge regions enhances wake capture and overall mesh convergence. As a result, Xpress Blade helps integrate mesh generation seamlessly into automated design and optimization workflows.
GridPro meshes are compatible with major CFD solvers like ANSYS CFX, Fluent, OpenFOAM, and STAR-CCM+, which streamlines downstream simulation efforts. Engineers can also incorporate GridPro meshes into automated parametric studies and optimization frameworks using Python or third-party integration tools, ensuring scalability across projects.
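As an illustration of such scripted parametric studies, the sketch below builds one batch-meshing command per design variant from a shared topology template. The executable name, flags, and file names are placeholders for the purpose of illustration, not GridPro's actual command-line interface:

```python
from pathlib import Path

def build_mesh_jobs(blade_angles, workdir="studies"):
    """Build one meshing command per blade-angle variant.

    All names below ("ggrid_batch", "impeller.fra", the flags) are
    hypothetical; the point is that a single topology template is
    reused across every parametric shape.
    """
    jobs = []
    for angle in blade_angles:
        case = Path(workdir) / f"blade_{angle:+05.1f}deg"
        jobs.append(["ggrid_batch",                      # placeholder executable
                     "--topology", "impeller.fra",       # reused template
                     "--param", f"blade_angle={angle}",  # design variable
                     "--out", str(case / "mesh.grd")])
    return jobs

jobs = build_mesh_jobs([-5.0, 0.0, 5.0])
```

Each command list could then be dispatched with `subprocess.run` or handed to an optimization framework's job queue.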

The transition to low-GWP refrigerants in centrifugal compressor applications brings with it a set of complex engineering challenges. Meeting performance goals while ensuring sustainability and compliance requires a deep integration of CFD, real gas modeling, and high-quality structured meshing.
Structured hexahedral meshes—and specialized tools such as Xpress Blade—provide the fidelity and flexibility necessary to simulate and optimize modern compressor designs. For engineers and R&D leaders in the heat pump sector, investing in robust meshing strategies is a foundational step toward reliable, efficient, and future-ready product development.
1. “Centrifugal compressor design and cycle analysis of large-scale high temperature heat pumps using hydrocarbons”, Antti Uusitalo et al., Applied Thermal Engineering 247 (2024) 123035.
2. “Design and CFD analysis of centrifugal compressor and turbine for supercritical CO2 power cycle”, Ashish Chaudhary et al., The 6th International Symposium on Supercritical CO2 Power Cycles, March 27-29, 2018, Pittsburgh, PA.
3. “Design and Operation of a Centrifugal Compressor in a High Temperature Heat Pump”, Benoît Obert et al., 5th International Seminar on ORC Power Systems, September 9-11, 2019, Athens, Greece.
4. “Combining Thermodynamics-based Model of the Centrifugal Compressors and Active Machine Learning for Enhanced Industrial Design Optimization”, Shadi Ghiasi et al., 1st Workshop on the Synergy of Scientific and Machine Learning Modeling, SynS & ML, ICML, Honolulu, Hawaii, USA, July 2023.
5. “Study of Performance Changes in Centrifugal Compressors Working in Different Refrigerants”, Yintao Wang et al., Energies 2024, 17, 2784.
6. “Design of centrifugal compressors for heat pump systems”, Andrea Meroni et al., Applied Energy, 232, 139-156.
7. “The Characteristic of High-Speed Centrifugal Refrigeration Compressor with Different Refrigerants via CFD Simulation”, Kuo-Shu Hung et al., Processes 2022, 10, 928.
8. “Energy Characteristics of the Compressor in a Heat Pump Based on Energy Conversion Theory”, Yingju Pei et al., Processes 2025, 13, 471.
9. “CFD Simulation of a Centrifugal Compressor using Star-CCM+”, Sai Anirudh Ravichandran, Master's thesis in Applied Mechanics, Chalmers University of Technology, Göteborg, Sweden, 2022.
10. “Design of the first stage of a centrifugal compressor with R1234ze(E) for heat pump in district heating”, Antonio Fois, THRUST Master of Science, Master Thesis Project Report, 30 CET, Université de Liège, Faculty of Applied Sciences, academic year 2020-2021.
The post From Grid to Green: Hexahedral Meshing for Low-GWP Centrifugal Compressor Designs in Heat Pumps appeared first on GridPro Blog.
Figure 1: Space debris research: structured multi-block mesh for a satellite. Image source: mesh generated by our French distributor, R.Tech.
1581 words / 8 minutes read
What happens when space junk falls back to Earth—and how can we predict the impact before it’s too late?
With thousands of defunct satellites and rocket fragments orbiting Earth, space debris poses a serious threat. This article uncovers how cutting-edge CFD simulations and intelligent meshing strategies are being used to predict the reentry behavior of this debris—helping prevent disasters and protect both space assets and life on Earth.
As space activity intensifies, Earth’s orbit is becoming increasingly cluttered with defunct satellites, spent rocket stages, and mission-related fragments—collectively referred to as space debris. These objects, once they complete their orbital life, often re-enter the atmosphere in unpredictable and dangerous ways. Understanding how this debris behaves during atmospheric reentry is critical for safeguarding both space assets and lives on the ground.
Computational Fluid Dynamics (CFD) has become a vital tool for simulating the complex flow and thermal environments experienced by these objects. However, the challenges of modeling irregular debris shapes, rapidly changing geometries, and dynamic trajectories require not only robust simulation techniques but also intelligent meshing strategies.
This article explores the need for space debris research, the role of CFD in this domain, the challenges it entails, and how tools like GridPro help address the meshing demands essential to such high-fidelity simulations.
Research on space debris has gained urgency due to the growing threat it poses to satellite operations and public safety. As the number of man-made objects in orbit increases, so does the risk of collision and uncontrolled atmospheric reentry. In a worst-case scenario, known as the Kessler syndrome, a cascade of collisions could render certain orbits unusable. Moreover, as more debris is projected to re-enter the atmosphere in the coming years, predicting which objects will burn up and which might survive to reach Earth’s surface has become a major concern.
International guidelines, such as those from NASA’s Orbital Debris Program Office, stipulate that re-entering debris should pose no more than a 1 in 10,000 chance of causing harm on the ground. Current predictive models often fall short of this accuracy. Many use simplified geometries and outdated correlation models that underestimate heat rates or overestimate drag, resulting in uncertain survivability predictions.
To address these limitations, researchers are developing new methodologies that combine automated CFD computations, normalization techniques, and machine learning to create more reliable and comprehensive tools for assessing reentry risks.

CFD plays a foundational role in space debris research by providing high-fidelity data on aerodynamic characteristics and heat rates. This information is crucial for determining how a piece of debris will behave during atmospheric reentry, including its trajectory, velocity, angle of impact, and potential for ground damage. Traditional models, such as modified Newtonian theory, often fall short in accurately capturing complex flow phenomena, especially around concave or irregular geometries. CFD offers a superior alternative by simulating these intricate interactions with a high level of detail.
Modern CFD methods are capable of handling thousands of simulations across a wide array of shapes and flow conditions. These simulations often account for phenomena such as shock interactions, random tumbling motions, changes in wall temperature, and geometry transformations due to ablation.
Many CFD solvers solve full 3D Navier-Stokes or Euler equations and can model thermochemical non-equilibrium gas compositions typical of high-altitude reentry scenarios. Databases of non-dimensional parameters, such as drag coefficients and shape factors, are generated to aid in faster yet accurate risk assessments. CFD results are then validated using experimental data from hypersonic wind tunnels and free-flight testing, providing critical input for improving certification tools.
Despite its advantages, the use of CFD in space debris simulations is not without significant hurdles. High-fidelity simulations are computationally intensive. A single simulation involving a six-degree-of-freedom model can take anywhere from 30 to 60 CPU-hours, making large-scale probabilistic assessments impractical using conventional approaches.
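The scale of the problem is easy to quantify. Taking the midpoint of the 30 to 60 CPU-hour range quoted above, even a modest database runs into hundreds of thousands of CPU-hours; the case counts in the sketch below are illustrative assumptions, not figures from any study:

```python
def database_cost(shapes, attitudes, flow_conditions, cpu_h_per_run=45):
    """Back-of-the-envelope CPU budget for a reentry CFD database.

    45 CPU-hours per run is the midpoint of the 30-60 range; the
    case counts passed in are illustrative assumptions.
    """
    runs = shapes * attitudes * flow_conditions
    return runs, runs * cpu_h_per_run

runs, cpu_hours = database_cost(shapes=20, attitudes=30, flow_conditions=8)
# dividing cpu_hours by the core count gives the wall-clock time on a cluster
```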
The tumbling nature of debris during atmospheric reentry introduces another layer of complexity, as the aerodynamic response varies significantly with object orientation. This requires simulations to be performed across numerous attitudes to capture an accurate average response.
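A minimal sketch of such attitude averaging is shown below, using Monte Carlo sampling of incidence angles and a toy Newtonian-like drag law in place of CFD data. Both the sampling approach and the drag model are assumptions for illustration only:

```python
import math, random

def sample_attitudes(n, seed=0):
    """Uniformly sample incidence angles over the sphere.

    For a randomly tumbling body, the incidence angle theta between
    the flow and a body-fixed axis has density sin(theta)/2 on [0, pi],
    which corresponds to cos(theta) uniform on [-1, 1].
    """
    rng = random.Random(seed)
    return [math.acos(1.0 - 2.0 * rng.random()) for _ in range(n)]

def mean_drag(cd_of_theta, n=20000):
    """Average a per-attitude drag coefficient over random tumbling."""
    thetas = sample_attitudes(n)
    return sum(cd_of_theta(t) for t in thetas) / n

# Toy model (assumption, not a CFD result): Newtonian-like drag of a
# flat plate, Cd ~ 2 sin^2(theta); the exact tumbling average is 4/3.
cd_avg = mean_drag(lambda t: 2.0 * math.sin(t) ** 2)
```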
Moreover, the diversity in debris shapes—from hollow hemispheres to irregular fragments—poses a challenge for both modeling and simulation. These objects may undergo ablation, changing their geometry mid-flight, which complicates the simulation further. Accurately capturing the interaction between the flow and these complex surfaces necessitates high-quality meshes.
Simulating thermal behavior, shock-shock interactions, and catalycity adds even more to the computational burden. To manage this, researchers are increasingly relying on normalized databases and advanced interpolation methods, which allow the reuse of CFD results across different scenarios without rerunning the entire simulation set.

Generating accurate and efficient meshes is one of the most challenging aspects of CFD simulations for space debris. Debris objects are often irregular, with sharp edges, cavities, or thin structures that demand high mesh resolution to capture essential features. However, increasing mesh resolution significantly raises computational costs and can make subsequent data-driven models, such as neural networks, too complex to be practical. Striking the right balance between detail and efficiency is crucial.
Another challenge lies in the need to maintain a static mesh structure when using deep learning models. Once a mesh is created for a specific object, it cannot be easily modified to represent different sizes or shapes without disrupting the model’s structure. This limitation becomes particularly problematic when trying to simulate objects that undergo shape changes during ablation.
Furthermore, mesh quality must be high enough to ensure convergence of the numerical methods used in CFD, especially in regions with strong gradients such as heatshield shoulders. Under-meshing in these regions can lead to inaccurate predictions of temperature and pressure distributions, potentially compromising the entire simulation.
To conduct meaningful CFD simulations on space debris, the mesh must meet several critical requirements. It needs to accurately capture the geometry of complex shapes and resolve important flow features such as shock interactions, recirculation zones, and expansion fans. The type of mesh used can vary depending on the simulation method. Unstructured meshes are favoured for their flexibility and local control over mesh density. Cartesian meshes, valued for their fast generation time and compatibility with automated simulations, are also widely used.
However, structured meshes—particularly multi-block structured meshes—are gaining popularity due to their ability to deliver high accuracy with fewer cells. These meshes are easier to validate for grid convergence and allow for efficient simulation across varying angles of attack using techniques like the rotating mesh approach. For scenarios involving machine learning, a single mesh is often used for all possible orientations, with the outer boundaries typically designed as spheres to ensure accurate wake modeling. Reusability is another key factor; once a high-quality grid topology is established, it can be reused for similar shapes, significantly reducing meshing time for future simulations.

Structured meshes offer several advantages that make them highly suitable for space debris CFD simulations. By maintaining uniform grid quality, they resolve complex flow features efficiently with fewer computational resources, delivering high-quality results with a minimum number of cells. The block-based nature of these meshes allows them to adapt to various shapes without losing accuracy, making them ideal for modeling irregular debris geometries.
Structured meshes also support innovative modeling techniques such as the rotating mesh approach, where a single topology can be used to simulate various orientations of a tumbling object. This eliminates the need to generate new meshes for each attitude and helps automate database generation from CFD computations. Overall, structured meshes strike a desirable balance between precision, flexibility, and computational efficiency, making them invaluable in the context of space debris modeling.
GridPro plays a crucial role in facilitating high-quality CFD simulations of space debris by streamlining the meshing process and enhancing the overall efficiency of simulations. Its ability to generate massively multi-block structured meshes allows for the precise modeling of complex and irregular geometries commonly found in space debris. The grids produced are of consistently high quality, which is essential for ensuring numerical convergence and accurate resolution of physical phenomena during reentry.
One of the standout features of GridPro is its support for the rotating mesh approach. This enables researchers to simulate a complete range of angles of attack using a single mesh, significantly reducing the time and effort required to prepare for each new simulation scenario. Additionally, GridPro’s reusability feature allows users to apply the same grid topology to multiple objects with similar geometric layouts, further enhancing efficiency and consistency across simulations.
The block structure of the meshes also supports effective parallelization, making it easier to deploy simulations on high-performance computing clusters. This capability is particularly valuable when running hundreds or thousands of simulations for probabilistic analysis or database generation. Overall, GridPro acts as a critical enabler in the CFD workflow for space debris research, bridging the gap between geometric complexity and computational feasibility.
Space debris reentry poses significant risks that demand accurate and efficient predictive tools. CFD has proven to be a powerful method for capturing the complex aerodynamics and thermal behavior of re-entering objects, but the success of these simulations hinges on the quality and structure of the underlying mesh. Structured meshes, especially those generated with tools like GridPro, offer the precision, adaptability, and computational efficiency necessary for tackling the unique challenges presented by space debris.
As research continues to evolve, combining high-fidelity CFD data with advanced meshing strategies and machine learning will be essential for developing the next generation of risk assessment and mitigation tools. By doing so, the scientific community takes a critical step toward ensuring safer and more sustainable use of Earth’s orbital environment.
This article is an outcome of the extensive work done on Space Debris by our French Distributor – R.Tech. We thank them for their valuable contribution to this article.
The post Unraveling Space Debris Reentry with CFD and Structured Meshing appeared first on GridPro Blog.
Figure 1: Structured multi-block meshing of a volute.
2200 words / 11 minutes read
With stricter pollution policies and increasingly eco-conscious customers, the demand for low-emission, energy-saving vehicles is steadily growing. With electric vehicles being all the rage, the fossil-fuel-driven automotive industry has upped the ante by building highly efficient, downsized engines. This has been made possible by improvements in turbocharger design. Turbochargers enable a significant reduction in internal combustion engine size and reduce fuel consumption and emission levels. Further, they also improve the engine rating, the limiting torque curve, and the torque back-up.
Over the years, the understanding of the flow field and the design of turbochargers have improved considerably. A major part of the industrial effort has gone into increasing the performance and reducing the losses of the impellers and vaned diffusers of the compressor and turbine. However, insufficient attention has been paid to the flow diffusion in the compressor volutes, the losses incurred, and the influence of volutes on the overall performance of the compressors.
With the latest trend for compact engines and installation constraints, manufacturers are forced to reduce the turbocharger size. This means compact volute designs with the flow leaving the volute exit with considerable kinetic energy. Hence a more careful evaluation of the flow diffusion, ways to reduce losses, and improve performance by understanding the influence of geometric parameters in volutes is certainly needed.
The flows inside volutes are complex and multiple parameters influence them. Hence, it is no wonder that there is little consensus among designers as to what an optimized volute geometry looks like. Adding to this lack of clarity, volutes designed by different approaches give different pressure ratios and efficiency.
Extensive research work has gone into understanding and improving impellers and vaned diffusers in compressors and turbines. In contrast, volutes are the least investigated and least understood component, yet this part plays an important role in the compressor's functioning.
The volutes strongly influence the compressor’s overall performance, stability limits, operating range, and pressure distortion at off-design conditions. Further, rather than the impeller, volutes determine the location of the point of best efficiency of the compressor.
Small changes in volute design can have a significant global impact on compressor performance. For example, shortening the volute tongue can affect the volute performance, which in turn influences the global performance of the radial compressor. Likewise, the dilemma regarding the choice of volute geometry, symmetrical or overhung, makes design decisions perplexing. These design issues are just the tip of the iceberg one confronts while designing volutes.
Hence, it is critically important to understand the influence of design parameters on volute and compressor performance. Also, even though the loss in volutes is less telling than that in the impeller or vaned diffusers, the potential for improvement is still sizable. Any small reduction in losses by design modifications does make a meaningful impact.
Lastly, flows in turbocharger centrifugal compressors by definition are non-axisymmetric due to the asymmetric nature of volute geometry, especially in off-design conditions. This results in the generation of pressure distortions in a circumferential direction which worsens the compressor’s stability and performance. This is a critical issue and it needs some serious attention.
Given the current trend of internal combustion engine downsizing, the stability of turbocharger centrifugal compressors is a major concern. Since these pressure distortions are produced in the volutes, volute design has lately gained greater attention as it considerably affects turbocharger performance.
At smaller mass flow rates, the volute acts as a diffuser, causing a rise in static pressure from the tongue to the volute exit. At larger mass flow rates, however, the volute becomes too small and the flow accelerates from the tongue to the volute exit.
Back pressure disturbances develop in the volute more specifically at the tongue region, which propagates upstream and influences the flow at the diffuser and impeller exit. This results in extra losses and pressure distortions around the impeller periphery. Further, these pressure distortions reduce the stage performance and have a direct impact on the impeller and diffuser flow stability.
A side effect of the development of circumferential pressure distortion is the creation of radial forces on the impeller shaft, which can lead to failure of the shaft bearings. The circumferential non-uniformity of the flow at the impeller exit also causes mixing losses in the diffuser. Lastly, this cyclic variation in the impeller channels at each rotation results in additional energy dissipation.
The repercussions of these pressure fluctuations caused by the volute-impeller interaction can be felt through elevated levels of noise and vibration. This is especially so near the tongue region. Hence, understanding the flow field distortion and ways to modify the volute design to reduce this distortion is of critical importance.

A number of the volute's geometrical parameters influence the compressor's performance, stability, and operating range. Out of this array, five parameters, namely the cross-sectional area distribution, the cross-sectional shape, the radial position of the cross-section, the location of the volute inlet, and the tongue geometry, have been recognized by many researchers as the major influential ones.
These parameters are related to the flow characteristics and losses inside the volute and hence directly impact the overall compressor performance. The following sections elaborate on these geometrical parameters in greater detail.
Studies have shown that volutes with cross-sectional area increasing circumferentially display better efficiencies and pressure ratios than constant-area cross-sections. In particular, a linear increase in area provides the best head and efficiency. This is because volutes with increasing area produce a uniform pressure distribution at design conditions. However, at off-design conditions, large pressure distortions are observed.
At low flow rates, the volute cross-sectional area is large. As a consequence, the flow initially decelerates causing a rise in static pressure, but later at the tongue, the pressure drops suddenly. On the other hand, at very high flow rates, the volute area is too small which causes a decrease in pressure as the fluid accelerates in the circumferential direction, but later at the tongue, the pressure suddenly increases.
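The interplay between the area law and the through-flow velocity can be illustrated with a short sketch. At the design point, the mass flow collected by the volute grows linearly with azimuth, so a linearly growing cross-sectional area keeps the bulk velocity, and hence the static pressure, roughly uniform. All numbers below are illustrative, not from any cited study:

```python
import math

def linear_area_distribution(a_exit, n_stations=8):
    """Cross-sectional area growing linearly with azimuth:
    A(theta) = A_exit * theta / (2*pi)."""
    return [(th, a_exit * th / (2 * math.pi))
            for th in (2 * math.pi * k / n_stations
                       for k in range(1, n_stations + 1))]

def mean_through_velocity(m_dot, rho, area):
    """Bulk velocity in a section from mass conservation."""
    return m_dot / (rho * area)

stations = linear_area_distribution(a_exit=4.0e-3)  # m^2, illustrative
# captured mass flow grows with azimuth: m(theta) = m_dot * theta/(2*pi)
v = [mean_through_velocity(0.5 * th / (2 * math.pi), 1.2, a)
     for th, a in stations]
```

Because both the captured mass flow and the area scale with azimuth, the bulk velocity stays constant at the design point; at off-design flow rates that balance breaks, producing exactly the deceleration or acceleration described above.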

Moving further, if we just consider the change in cross-sectional area, studies reveal that at low flow rates, a larger volute cross-sectional area significantly decreases the maximum rise in pressure coefficient but increases the maximum flow coefficient of the compressor. Further, the maximum efficiency reduces by up to 2% with the increase in area, and it gets shifted to higher flow rates.
The shape of the cross-section has an influence on volute losses. On closer observation, it is seen that modification in cross-section shape affects the operating range more than the peak efficiency. Volutes with various shapes including circular, semi-circular, elliptic, rectangular, and square have been experimented with and their influence closely assessed.

Out of these different cross-sections, volutes with circular cross-sections exhibit lower wall friction and mixing losses as they have a smaller wetted area. Also, it is observed that, with circular cross-sections, it is possible to eliminate the secondary vortices developing inside the volute.
Volutes with square or rectangular sections are often used because they are easy to manufacture. However, square cross-sections perform worse than circular ones owing to increased flow losses, and for the same reason rectangular sections are inferior to square ones.
Studies experimenting with volute inlet location have shown that tangential inlets (Figure 5b) are more efficient compared to symmetric volutes (Figure 5a). It is observed that asymmetrical shapes provide a larger stable operating range, higher mass flow rate, and higher pressure coefficient.

The reason for this is that tangential inlets produce a single vortex while symmetric volutes produce a double vortex structure. With a double vortex, the distance between the opposing flow directions is reduced and the radial velocity gradients increase close to the diffuser outlet. Both effects increase shear stress and hence losses in symmetrical volutes.

Variation in the radial position of the volute channel results in an increase or decrease in losses, thereby influencing the compressor performance. From the principle of conservation of angular momentum, the tangential velocity in a swirling flow is inversely proportional to the radius. If the volute channel is positioned above the diffuser at a radius smaller than the diffuser outlet, then the tangential velocity inside the volute channel is higher than that at the diffuser outlet. This results in additional losses and an undesired static pressure drop.
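A quick sketch of this free-vortex effect, with illustrative numbers that are not taken from any cited study:

```python
def free_vortex_velocity(v_ref, r_ref, r):
    """Tangential velocity from conservation of angular momentum:
    v * r = const, so moving the volute channel to a smaller radius
    raises the swirl velocity (and the dynamic head to be dissipated)."""
    return v_ref * r_ref / r

# Diffuser exit at r = 0.10 m with 80 m/s swirl (illustrative numbers)
v_inner = free_vortex_velocity(80.0, 0.10, 0.08)  # internal volute channel
v_outer = free_vortex_velocity(80.0, 0.10, 0.12)  # external volute channel
# kinetic energy scales with v^2: ratio of dynamic heads at the two radii
ke_ratio = (v_inner / v_outer) ** 2
```

Even this crude estimate shows why moving the channel outward pays off: the dynamic head to be diffused at the smaller radius is more than double that at the larger one.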
If we keep the cross-sectional shape and circumferential variation of the cross-sectional area constant and only increase the volute channel to a larger radius, a reduction in loss coefficient of up to 30% is observed.

On the other hand, for an internal volute with a cross-section radius smaller than the diffuser exit radius, a high loss-coefficient is observed for the entire operating range. In fact, the losses are very high as in collectors.
Further, it is noticed that a small variation in the radial position of the volute tongue with respect to the exit diffuser cone has an impact on the global efficiency of the volute. An increase in the radial position of the volute tongue increases efficiency.
Volute positional variations in the axial direction are usually presented as symmetric volutes and overhung volutes. For applications like the aerospace field where installation constraints such as space and weight are high, overhung volutes are employed.

Experiments and CFD simulations have shown that asymmetric volutes exhibit higher efficiencies than symmetric ones. This is because symmetric volutes are vulnerable to larger blockages at the inlet due to the generation of double swirl vortices. Also, a much stronger mixing process is observed to occur in symmetric volutes than overhung volutes, leading to higher losses.

When compared with forward and symmetric installations, backward-installed volutes (overhung volutes) show lower total pressure loss and higher static pressure recovery. The speed asymmetry coefficient at the backward-installed volute exit is also slightly higher than at the forward- or symmetric-installed volute exits. Further, the backward-installed volute has a more reasonable and uniform velocity field and better overall performance under different conditions, mainly because of the non-uniform outlet flow at the diffuser.
The radial clearance between the volute tongue and the impeller has a significant impact on the pressure distribution inside the volute and hence on the stage performance. Usage of a larger radial clearance distance leads to a reduction in interaction between the volute tongue and impeller. This helps in forming a better smoothened circumferential pressure variation in the tongue region.

However, increasing the radial clearance increases the recirculating flow in the gap between the tongue and the impeller and decreases the volute exit cross-section.
Normally, a volute with zero clearance gives a good volute performance at the design point. However, they are less adaptive to flow condition variations. Smoothening the leading edge of the volute tongue or in other words, providing a radial clearance helps the volute to achieve better efficiencies and stable flow range under off-design conditions also.

To determine the optimal radial clearance over the entire flow range, the gap can be modified either by changing the length of the tongue or by changing the tongue angle. In other words, the position and shape of the tongue become the influencing geometric variables affecting machine performance. Rounding and retracting the tongue (shortening it) increases the machine head and compression ratio at both low and high flow rate conditions. Furthermore, many researchers have reported a significant reduction in noise levels and unbalanced aerodynamic forces when the tongue is moved away from the impeller.
A good amount of study has been done to understand and quantify the role of these five major volute geometric parameters and other minor variables. But further research is needed to gain more clarity on, and appreciation for, the role of geometric parameters in influencing volute flows. With the availability of parametric modeling software like CAESES and component-specific structured meshing software like GridPro, a multitude of geometric variants can be built and meshed in an automated environment and CFD simulations performed. With parametric modeling, the level of influence of each geometric parameter can be brought out more clearly and accurately. Perhaps, after such optimization exercises are conducted by independent researchers and organizations, it will be possible to reach some general consensus as to what an optimized volute looks like.
1.“Genetic Algorithm Optimization of the Volute Shape of a Centrifugal Compressor”, Martin Heinrich et al, International Journal of Rotating Machinery, Volume 2016, Article ID 4849025, 13 pages.
2. “Effect of inlet configuration and pulsation on turbocharger performance for enhanced energy recovery”, Jose Francisco Cortell Forés et al, PhD thesis, Department of Mechanical Engineering Imperial College London JUNE 2018.
3. “Experimental studies on volute-impeller interactions of centrifugal compressors having vaned diffusers”, Christos Georgakis, PhD Thesis, Academic year 2003.
4. “Unsteady behaviours of a volute in turbocharger turbine under pulsating conditions”, Mingyang Yang et al, JOURNAL OF THE GLOBAL POWER AND PROPULSION SOCIETY, 2017, 1: 237–25.
5. “An investigation of volute cross-sectional shape on turbocharger turbine under pulsating conditions in internal combustion engine”, Mingyang Yang et al, Energy Conversion and Management 105 (2015) 167–177.
6. “The Impact of Volute Aspect Ratio on the Performance of a Mixed Flow Turbine”, Samuel P. Lee et al, Aerospace 2017, 4, 56.
7. “Design and optimization of Turbo compressors”, C. Xu & R.S. Amano, WIT Transactions on State of the Art in Science and Engineering, Vol 42, 2008.
8. “Influence of various volute designs on volute overall performance”, Xiaoqing Qiang et al, Journal of Thermal Science Vol.19, No.6 (2010) 505−513.
The post Designing Turbocharger Compressor Volutes appeared first on GridPro Blog.
Figure 1: Automated hexahedral meshing for an axial turbine using point cloud mapping.
Discover a novel approach to automated hexahedral meshing using the CAESES-GridPro integration, leveraging topology templates and point cloud mapping for efficient, high-quality CFD meshes. Key techniques like Radial Basis Function (RBF) morphing ensure precise adaptation to shape variants.
In the realm of CFD mesh generation, scalability is crucial, especially when dealing with multiple design variants. This is where topology template-based approaches, like those offered by GridPro CFD Solutions, shine. These block-based templates are designed with scalability in mind, allowing a single carefully constructed topology to be reused across multiple parametric shapes. This significantly reduces simulation workflow time and meshing effort, while ensuring that grid modifications remain consistent and self-similar. This consistency is invaluable for accurate comparative studies, where minor deviations in grid structure could otherwise skew results.
The block template-based approach overcomes the limitations seen in traditional structured and unstructured meshing techniques. While unstructured grids are often praised for their ease of mesh modification, they come with drawbacks like a higher number of elements, the need to constantly adjust grid size for shape changes, and compromises on cell control, simulation time, and accuracy. Structured grids, known for their cell quality and simulation accuracy, have traditionally been challenging to apply across numerous design variants due to the manual effort required.
The hexahedral meshing software GridPro addresses these challenges by letting simulation engineers modify topologies manually for significant shape modifications, while its in-house mesh smoothing algorithm, Ggrid, automatically adapts and smooths the computational mesh for smaller deviations. However, in some cases the block positioning may not be favourable for Ggrid to maintain good mesh quality, resulting in highly skewed or folded cells.
To further streamline the process, GridPro has developed a topology mapping feature in collaboration with Caeses, which automatically maps the topology from the baseline model to its shape variants. Point cloud pairs are used to map the topology from the baseline model to the variation, ensuring that even complex design variations maintain the same level of grid quality as the original model. This optimization-based mesh morphing saves time and enhances the accuracy and reliability of simulations across multiple design iterations.
In computational fluid dynamics (CFD) and design optimization workflows, especially those involving parametric studies, mesh quality and consistency play a critical role in ensuring accurate and comparable simulation results.
Unstructured grids, while easier to adapt, have certain limitations and disadvantages:
- a higher number of elements for a given resolution
- the need to constantly readjust grid size as the shape changes
- compromises on cell control, simulation time, and accuracy

In contrast, structured grids provide:
- high cell quality and fine control over cell placement
- better simulation accuracy with fewer elements
Topology morphing is a crucial concept in CFD: it adapts a predefined mesh topology to fit a parametrically changing geometry while preserving essential properties such as block connectivity, element size, aspect ratio, and overall mesh quality. This process ensures that the computational domain remains accurate and functional as the geometric design evolves.
In practice, topology adjustment can be achieved through various methods. One common approach is the spring analogy, where topology elements are connected by imaginary springs. When the geometry deforms, these springs adjust the blocks automatically, helping maintain a smooth transition. Additionally, smoothing algorithms can be applied to refine block quality after the boundary nodes have been adjusted.
A more advanced technique involves using Radial Basis Function (RBF) Interpolation to fine-tune node positioning in response to shape deformation. This method is particularly effective for ensuring that the topology conforms precisely to the deformed design variants.

In our workflow, two similar parametric models are compared: we identify a random set of nodes on the initial and deformed geometries and create a map between them. This map is then used to morph the topology from one design to another.
By leveraging these techniques, we can effectively morph the topology to accommodate changes in geometry, ensuring consistent grid generation for accurate and reliable simulations.
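As an illustrative sketch of this mapping idea (not GridPro's internal implementation): given matched point pairs on the baseline and deformed geometries, an RBF fit of their displacements can relocate the remaining topology nodes. The points below are invented for the example; a thin-plate-spline kernel reproduces this simple affine stretch exactly.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Matched node pairs sampled on the baseline and deformed geometries
# (synthetic example: a 20% stretch in x).
baseline_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
deformed_pts = baseline_pts * np.array([1.2, 1.0])

# Fit an RBF map from baseline positions to their displacements.
rbf = RBFInterpolator(baseline_pts, deformed_pts - baseline_pts,
                      kernel="thin_plate_spline")

# Morph the remaining topology nodes by evaluating the displacement field.
topology_nodes = np.array([[0.5, 0.5], [0.25, 0.75]])
morphed = topology_nodes + rbf(topology_nodes)
```

The same displacement field can then move every block corner of the template, after which a smoother such as Ggrid relaxes the mesh.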


To test this new adaptive meshing approach, an axial turbine blade was selected as the first test case. Initially, a baseline wireframe topology for the turbine blade is constructed manually in the GridPro UI. This is the only step requiring human intervention. Once the baseline topology is established, it serves as a template for automated mesh generation across various design variants within the Caeses platform.

Next, GridPro is integrated into Caeses using an integration script, creating a closed-loop system. The script is designed to manage surface mesh generation, CAD file conversion, topology adaptation, and grid generation. In this setup, Caeses parametrically modifies the axial turbine blade shape to produce different variants, while GridPro automatically generates multi-block structured meshes for each variant without further user involvement.

For the axial turbine test case, 50 parametric variants were generated by varying 7 design parameters. The baseline topology, created in approximately 45 minutes, was used as a template to generate structured grids for the remaining 49 variants. The entire process took around 350 minutes, roughly 6 hours.
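A quick back-of-the-envelope check of that throughput (the minute figures are taken from the text above):

```python
# Wall-clock figures from the axial-turbine campaign described above.
baseline_min = 45      # manual baseline topology construction
total_min = 350        # end-to-end time for all 50 variants
auto_variants = 49     # variants meshed with no user input

per_variant = (total_min - baseline_min) / auto_variants  # minutes each
```

So each automated variant costs only a few minutes, versus the 45-minute manual baseline.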





Ready to Automate Your Meshing Workflow?
GridPro’s intelligent structured meshing automation solution reduces manual effort and maximizes accuracy—making it ideal for design optimization in CFD.
Schedule a free demo or contact us to see how GridPro can accelerate your simulation pipeline.
The adopted approach is effective in automating parametric geometric meshing. The developed workflow, which utilizes topology templates and point cloud mapping, significantly reduces the manual effort traditionally required in structured mesh modification. By leveraging techniques such as Radial Basis Function (RBF) interpolation in meshing, the method ensures that topology adapts accurately to geometric changes, preserving mesh quality and ensuring reliable simulation outcomes.
The successful application of this methodology to axial turbines, radial turbines, exit casings, compressor volutes, and centrifugal compressors demonstrates its efficiency in generating high-quality grids for multiple design variants with minimal user input. The ability to mesh 50 geometric variants within six hours highlights the approach’s scalability and robustness, reinforcing its potential for widespread adoption in industrial applications of engineering simulation.
We sincerely thank Caeses for providing all the geometries, which were crucial for generating the structured meshes. More details about Caeses' work can be found at Caeses Shape deformation and morphing.
The post Automated Hexahedral Meshing with GridPro: Structured Meshes for Parametric Geometry Variants appeared first on GridPro Blog.
Figure 1: Structured multiblock mesh for Turbocharger compressors.
Optimizing Turbocharger Performance with CFD-Driven Compressor Design and Automated GridPro Meshing Tools
Turbochargers have transformed internal combustion engines by significantly boosting power output without increasing engine size. At the core of this innovation is the centrifugal compressor, a key component responsible for compressing and supplying air to enhance combustion efficiency. Its performance depends on careful impeller design, aerodynamic optimization, and advanced computational techniques.
CFD simulation plays a crucial role in refining compressor aerodynamics, allowing engineers to enhance compressor efficiency and turbocharger performance. Structured meshing further improves the accuracy of these simulations. GridPro's advanced structured meshing tools, Xpress Volute and Xpress Blade, automate the hexahedral meshing process, reducing design iteration time while ensuring the high-quality grids essential for precise CFD analysis of compressors and performance optimization.
The compressor in a turbocharger plays a crucial role in enhancing engine performance by increasing the density of the intake air. By compressing incoming air, it ensures a higher oxygen supply, which leads to more efficient combustion, improved fuel efficiency in turbocharged engines, and greater power output. The effectiveness of the compressor directly influences the overall efficiency of the turbocharger, making its design a key aspect of performance optimization.
Several factors impact compressor performance, with the pressure ratio being one of the most significant. This determines the level of air compression achieved, directly affecting engine output. To deliver the required mass flow without instability, the compressor’s flow characteristics must be carefully designed. Achieving the right balance between these factors ensures maximum efficiency, durability, and aerodynamic performance.

Compressor efficiency plays a crucial role in determining the overall performance of both the turbocharger design and the engine. One of its most significant impacts is on fuel efficiency in turbocharged engines. In a turbocharged engine, higher compressor efficiency reduces the energy required for the pumping cycle, directly improving Brake Specific Fuel Consumption (BSFC). By minimizing energy losses, an efficient compressor ensures that more of the fuel’s energy is converted into useful work rather than being wasted.
A more efficient compressor also contributes to lower emissions by reducing fuel consumption, helping engines comply with increasingly stringent environmental regulations. In heavy-duty applications, where high pressure ratios are required for effective combustion, improved efficiency ensures that the necessary boost is achieved with minimal energy input. This not only enhances performance but also expands the compressor’s operating range, allowing it to function effectively across various engine speeds and loads. Additionally, optimizing compressor aerodynamics reduces noise generation, an essential consideration in modern turbomachinery design.
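Compressor efficiency here is the usual isentropic (total-to-total) efficiency, which follows from the pressure ratio and the measured temperature rise. The operating-point numbers below are illustrative, not from any specific machine:

```python
# Isentropic efficiency from the standard compressible-flow relation
#   T2s / T1 = PR ** ((gamma - 1) / gamma)
gamma = 1.4        # ratio of specific heats for air
T1 = 300.0         # inlet total temperature, K (assumed)
PR = 3.0           # total pressure ratio (assumed)
T2 = 430.0         # actual outlet total temperature, K (assumed)

T2s = T1 * PR ** ((gamma - 1.0) / gamma)  # ideal (isentropic) outlet temperature
eta = (T2s - T1) / (T2 - T1)              # isentropic efficiency
```

For these assumed numbers the stage comes out around 85% efficient; every point of efficiency lost shows up directly as extra turbine work and worse BSFC.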
Designing highly efficient compressors presents a range of complex challenges that engineers must carefully address to optimize turbocharger performance. One of the most significant difficulties arises from the high tip speeds at which turbocharger compressors operate. These high speeds create intricate flow structures, including shock waves in transonic designs, which can lead to substantial efficiency losses. Managing these effects requires precise aerodynamic optimization to minimize performance penalties.
Another critical challenge is tip leakage, where airflow escapes through the gap between the blade tip and the casing. This leakage not only reduces efficiency but also increases noise levels, making it essential to develop sealing techniques and design strategies that minimize these losses. Many modern compressors incorporate splitter blades to extend their operating range, but their effectiveness depends heavily on proper design. Poorly designed splitter blades can disrupt airflow, leading to mismatches and reduced overall efficiency.
In addition to aerodynamic considerations, modern compressors must also meet stringent noise-reduction requirements. Balancing high aerodynamic efficiency with low noise emissions is a major challenge, requiring innovative aeroacoustic optimization. Engineers must also navigate the trade-offs between achieving a wide operating range and maintaining high efficiency, as improving one often comes at the cost of the other.
Furthermore, traditional manufacturing constraints can limit the ability to implement optimal blade designs, though advancements in precision manufacturing techniques, such as point milling, are helping to overcome these limitations.
Addressing these challenges demands a combination of advanced computational tools, innovative design approaches, and cutting-edge manufacturing solutions.

CFD simulation plays a vital role in modern compressor design by offering detailed insights into fluid dynamics within the impeller and volute. With CFD, engineers can analyze complex flow structures, turbulence, and loss mechanisms, such as shock waves, tip leakage, and secondary flow. This allows for the optimization of compressor aerodynamics and also helps to minimize performance losses.
Moreover, CFD analysis for compressors enables engineers to assess how design changes impact key parameters like compressor efficiency, pressure ratio, and operating range. It also provides the capability to evaluate compressor performance under various operating conditions, including off-design scenarios. By reducing the need for extensive physical testing, CFD accelerates the design process, identifies potential surge and stall conditions, and ultimately enhances the reliability and performance of the compressor.
By leveraging CFD simulations, engineers can iteratively refine designs to ensure optimal performance and reduced time-to-market.

Mesh generation is a critical component in achieving accurate CFD simulations for compressor analysis. It defines the resolution of the flow field and plays a significant role in the stability and convergence of the numerical simulation. The density of the mesh is a key consideration, as a finer mesh provides higher resolution but also increases computational costs. Mesh sensitivity studies help identify the optimal density, ensuring that the solution is independent of the mesh size.
Another important factor is the resolution of the boundary layer, which is essential for accurately capturing wall effects and predicting losses in the flow. Grid smoothness is equally crucial as it helps minimize numerical errors and ensures stable simulations. The mesh must also meet certain quality standards, such as maintaining proper aspect ratio, minimum angle, and expansion factor to guarantee reliable results.
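As a minimal sketch of the spacing arithmetic behind those checks, a boundary-layer stack is commonly specified by a first-cell height, an expansion (growth) ratio, and a layer count; the values below are illustrative:

```python
def bl_stack(first_height, growth, n_layers):
    """Heights and total thickness of a geometrically growing layer stack."""
    heights = [first_height * growth**i for i in range(n_layers)]
    return heights, sum(heights)

# 20 layers, first cell 1e-5 (model units), expansion factor 1.2.
heights, total = bl_stack(first_height=1e-5, growth=1.2, n_layers=20)
```

Keeping the expansion factor modest (commonly below about 1.3) limits the cell-to-cell size jump that such quality checks flag.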
By carefully designing and optimizing the mesh, CFD simulations can accurately capture complex flow phenomena like tip leakage, secondary flows, and shock waves, all of which play a vital role in determining compressor performance.

Hexahedral meshing is widely preferred in the CFD analysis of compressors because it offers superior accuracy and computational efficiency. One of the main advantages of hexahedral grids is their ability to reduce numerical diffusion, leading to more precise flow predictions, particularly in complex phenomena such as boundary layers and shock waves.
In addition to better accuracy, these meshes require fewer elements to achieve high resolution, which lowers computational costs and improves solver efficiency. Hexahedral meshes also enhance convergence stability by supporting smoother flow transitions and more accurate gradient resolution, making the simulation process more reliable. Furthermore, they provide efficient boundary layer capture, as their structured nature allows for well-aligned cells near solid walls, crucial for accurate near-wall flow predictions.
These characteristics make structured hexahedral meshing the ideal choice for critical regions in compressor design, such as the impeller passage, volute tongue and vaneless diffuser, where precise flow analysis is essential.

GridPro’s automated meshing tools simplify and accelerate the meshing process for compressor impellers and volutes. One of the key advantages of GridPro is its ability to enhance solution accuracy. With features like the Xpress Volute and Xpress Blade meshing tools, it captures complex flow fields with high precision, leading to better performance predictions.
The tool’s versatile blocking structure adapts to various geometric variations, providing flexibility in design. By automating the meshing process, GridPro minimizes manual errors, ensuring consistent mesh quality and improving the overall integrity of the simulation. This automation also accelerates the workflow, allowing for faster iterations and quicker optimization, which is essential for effective compressor design.
Additionally, GridPro seamlessly integrates with CAD tools and flow solvers, streamlining both the design and simulation phases. The software excels at capturing intricate flow dynamics, such as swirl patterns and the tongue region, which are crucial for optimizing turbomachinery performance. With features like 1-1 connected meshing, it improves accuracy in tip flow simulations, ultimately reducing CFD simulation time while maintaining high reliability and accuracy.
Compressors are fundamental to turbocharger performance, and their design requires detailed CFD analysis to ensure efficiency and reliability. Structured hexahedral meshing plays a crucial role in obtaining accurate CFD results, and GridPro’s automated meshing tools streamline the process, reducing time while maintaining precision. As turbocharger technology advances, leveraging automated meshing and high-fidelity CFD simulations will continue to be essential in achieving optimal compressor designs.
We sincerely thank CFDsupport for providing the compressor geometry, which was crucial for generating the structured mesh. The compressor model was created using CFturbo software. More details about CFDsupport’s work can be found at Centrifugal Compressor.
1. “3D Multi-Disciplinary Inverse Design Based Optimization of a Centrifugal Compressor Impeller”, 2013.
2. “A 3D Automatic Optimization Strategy for Design of Centrifugal Compressor Impeller Blades” – K.F.C. Yiu and M. Zangeneh, ASME, 1998.
3. “A Detailed Loss Analysis Methodology for Centrifugal Compressors”, ADT Publication, 2019.
5. “Application of 3D Inverse Design Method on a Transonic Compressor Stage“, 2021.
6. “Design and optimization of compressor for a fuel cell system on a commercial truck under real driving conditions”, Institution of Mechanical Engineers, 2023.
7. “Design of a Centrifugal Compressor Stage and a Radial-Inflow Turbine Stage for a Supercritical CO2 Recompression Brayton Cycle by Using 3D Inverse Design Method“, 2017.
8. “Design of a Mixed-flow Transonic Compressor for Active High-lift System Using a 3D Inverse Design Methodology”, ASME, 2020.
9. “Development of a high performance centrifugal compressor using a 3D inverse design technique“, 2010.
10. “Investigation of an Inversely Designed Centrifugal Compressor Stage” – M. Schleer, S. S. Hong, M. Zangeneh, ASME, 2003.
11. “Multi Objective Design of a Transonic Turbocharger Compressor with Reduced Noise and Increased Efficiency“, ASME, 2019.
12. “Multi-point Optimisation of an Industrial Centrifugal Compressor with Return Channel by 3D Inverse Design”, ASME Turbo Expo.
13. “Optimization of 6.2 to 1 Pressure Ratio Centrifugal Compressor Impeller by 3D Inverse Design”, ASME Turbo Expo, 2011.
14. “Redesign of a Transonic Compressor Rotor by Means of a Three-Dimensional Inverse Design Method: A Parametric Study”, ASME, 2005.
15. “Redesign of a Compressor Stage for a High-performance Electric Supercharger in a Heavily Downsized Engine”, ASME, 2017.
16. “Tandem Blade Centrifugal Compressor Design Optimization 3D Inverse Design” – Ricardo Oliveira, Luying Zhang, European Conference on Turbomachinery Fluid dynamics & Thermodynamics, 2023.
17. “The Design of High Temperature Heat Pump Compressor Using the Inverse Method“, ASME, 2023.
The post Enhancing Turbocharger Efficiency with CFD and Automated Meshing Tools appeared first on GridPro Blog.
I ran a new quiet-supersonic study at Mach 1.45 and 55,000 ft using the built-in atmosphere tables and Cartesian solver in Stallion 3D. The goal was to reproduce and understand the kind of pressure distribution seen in the NASA X-59 QueSST demonstrator, which recently completed its first flight. The idea is the same: manage the shock pattern so the ground hears a soft “thump” instead of a sonic boom.
The simulation shows a controlled series of small compressions marching down the forebody rather than one big, coalesced shock. That’s exactly what quiet-supersonic shaping is about—spreading the pressure rise (Δp/Δx) gradually so the far-field signature becomes a sequence of gentle steps instead of a single N-wave.
At these flight conditions, the distributed shock train is similar to what the X-59 team reported during their low-boom configuration tests. It’s encouraging to see Stallion 3D’s Navier–Stokes solver naturally produce the same kind of flow behavior on a simple Cartesian grid.
Right behind the cockpit, a red-blue compression and expansion pattern forms where the fuselage grows into the wing root. This region is a classic challenge in supersonic design—where cross-section growth and lifting surfaces meet, shocks can thicken and contribute to secondary noise.
It’s good to see that Stallion 3D’s refinement zone resolves these local gradients clearly, without any hand-built body-fitted grid. The automatic cell concentration gives an accurate look at how geometry transitions affect both drag and acoustic signature.
The aft wing and tail surfaces are doing real aerodynamic work. The pressure remains mostly clean, but there are still distinct compression and expansion regions being shed downstream.
In low-boom design, the rear shaping is as important as the nose. The aft body determines how the pressure signature closes—the part that controls how the sonic waveform ends. That’s the part that often separates a “thump” from a “bang.”
The local grid density around the aircraft shows that the refinement box is working exactly as intended. It captures oblique shocks and shear layers efficiently, even at Mach 1.45, without requiring a fitted mesh.
From a numerical standpoint, this confirms that Stallion 3D’s Cartesian method is practical for supersonic concept studies—especially for early X-59-style configurations or general quiet supersonic transport layouts.
The run used true high-altitude conditions (55,000 ft, Mach 1.45) from the built-in atmosphere model. These are the same conditions typically quoted for quiet-supersonic cruise tests and community response research under NASA’s QueSST program.
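Those cruise conditions can be sanity-checked with a generic International Standard Atmosphere calculation (this is not Stallion 3D's internal table): 55,000 ft sits in the isothermal stratosphere layer.

```python
import math

# ISA 11-20 km layer: constant T = 216.65 K, exponential pressure decay.
G, R, GAMMA = 9.80665, 287.053, 1.4
T = 216.65                       # K, constant in this layer
P11, H11 = 22632.0, 11000.0      # pressure (Pa) and altitude (m) at 11 km

h = 55000 * 0.3048               # 55,000 ft in metres
p = P11 * math.exp(-G * (h - H11) / (R * T))   # static pressure, ~9.1 kPa
a = math.sqrt(GAMMA * R * T)                   # speed of sound, ~295 m/s
v = 1.45 * a                                   # Mach 1.45 true airspeed
```

The low static pressure and cold, constant-temperature air are why thin, swept lifting surfaces behave so differently up here than in low-altitude transonic tests.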
That realism matters for both acoustics and aerodynamics. At these pressures and densities, thin, swept lifting surfaces behave differently than they do in low-altitude transonic tests.
This quiet-supersonic run demonstrates what Stallion 3D does best—showing real aerodynamic detail from first principles without external meshing or post-processors. The solver’s ability to capture distributed shocks, canopy interactions, and aft-body effects all in one pass makes it an effective tool for early design of low-boom aircraft like the X-59 QueSST.
It’s not about pretty colors; it’s about credible data at real flight conditions. The results show a clean, believable Mach 1.45 solution with controlled shock structure—the kind of solution that points the way toward practical, certifiable overland supersonic transport.
Do Fish Swim Like Multi-Element Airfoils?
In nature, a school of fish moves as a coordinated system. Each fish swims in the wake of another, taking advantage of pressure differences and induced flows that reduce drag and save energy. It’s a clean example of fluid mechanics at work — and not too different from how engineers design multi-element airfoils for high lift.
The image above shows a simulation created from fish-shaped outlines. The shapes were first traced as simple drawings and then captured using Airfoil Digitizer. Airfoil Digitizer lets you turn almost any outline — hand-drawn, scanned, or imported — into an analysis-ready shape. You are not limited to NACA airfoils or standard sections. If you can sketch it, you can analyze it.
After digitizing the shapes, I placed them together and ran a potential flow solution in MultiElement Airfoils. This solver computes the velocity and pressure field around multiple bodies at once, and shows how they interact. The colored contours represent pressure: blue for low (suction) regions and red for higher pressure. You can see how each “fish airfoil” changes the flow around its neighbors, very much like the interaction between a slat, a main wing, and a flap.
This is the interesting part: even with playful shapes, the physics is still there. You get wake shielding, suction peaks, and local acceleration in the gaps. That’s the same family of effects we care about in real applications — multi-element wings, hydrofoils, propeller/wing interference, and UAV control surfaces working close together.
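Those colored contours map directly onto the pressure coefficient of the potential-flow solution via Bernoulli's equation, Cp = 1 - (V/V_inf)^2; a minimal illustration:

```python
# Cp at a stagnation point, in the freestream, and in an accelerated gap.
V_inf = 1.0
cps = [1 - (V / V_inf) ** 2 for V in (0.0, 1.0, 1.5)]
# Cp = 1 at stagnation (red), 0 in the freestream,
# and negative where the flow accelerates (blue suction regions).
```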
The workflow here was:
1) Sketch or outline a shape
2) Capture it with Airfoil Digitizer
3) Arrange multiple elements and solve the flow in MultiElement Airfoils
4) Visualize pressure and interaction
It’s a fun demonstration, but also a serious one. Airfoil Digitizer gives you full control over the geometry. MultiElement Airfoils lets you study how multiple lifting surfaces behave together, not just one at a time. Together they make it easy to explore ideas, test concepts, and see the aerodynamics before you ever build a model.
Visit ➡️ https://www.hanleyinnovations.com
Best regards,
Patrick
For more information, please visit https://www.hanleyinnovations.com
Solved by Default 🛠️ — Import • Mesh • Solve in Stallion 3D
A quick walkthrough by Dr. Patrick Hanley (Hanley Innovations)
Click the image to watch the short demo on YouTube.

In this quick demo, we import a drone STL, let Stallion 3D auto-configure the CFD boundaries and domain sizing, pick a sensible default mesh, and run the solver—going from geometry to pressure contours in minutes.
What the video covers
Why this workflow is fast
Watch the short demo
Prefer reading? Reply with questions—happy to help you try this on your geometry.

© Hanley Innovations • This email is informational. Video & demo: Dr. Patrick Hanley.
How they work for aircraft takeoff/landing and motorsports downforce, and how they are designed by engineers.
A multi-element airfoil is a lifting surface made from two or more cooperating profiles—typically a main element plus leading-edge slats and/or trailing-edge flaps. By carefully positioning the elements (gaps, overlaps, and deflections), designers dramatically increase lift (for aircraft) or downforce (for cars) at low to moderate speeds without making the wing excessively large.
In practice, a single-element airfoil might achieve a CL,max around ~1.4 (order of magnitude), while a well-designed multi-element system can exceed ~2.5–3.0+ depending on geometry, Reynolds number, and deflection schedule. (For cars, think of “negative lift” or downforce rather than positive lift.)
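The payoff of a higher CL,max follows from the stall-speed relation V_s = sqrt(2W / (rho S CL,max)): doubling CL,max cuts stall speed by a factor of sqrt(2). The aircraft numbers below are invented for illustration:

```python
import math

RHO = 1.225                    # sea-level air density, kg/m^3
W = 70000 * 9.81               # weight of a ~70-tonne aircraft, N (assumed)
S = 125.0                      # wing reference area, m^2 (assumed)

def stall_speed(cl_max):
    """Stall speed from the level-flight lift equation, m/s."""
    return math.sqrt(2 * W / (RHO * S * cl_max))

clean = stall_speed(1.4)       # single-element wing
high_lift = stall_speed(2.8)   # slats and flaps deployed
```

With these assumed numbers, the high-lift configuration stalls near 57 m/s instead of 80 m/s, which is exactly the margin that makes practical runway lengths possible.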
Airliners need huge lift at low speeds to operate from practical runways. On approach and takeoff, they deploy leading-edge slats and multi-segment trailing-edge flaps to raise CL,max, allowing lower approach speeds, shorter distances, and improved safety margins. In cruise, devices retract to reduce drag.
Many business jets and turboprops use slats and flaps for field performance. Short-takeoff-and-landing (STOL) aircraft do the same, sometimes adding devices like fences, cuffs, Krueger flaps, or blown flaps to energize flow and improve controllability near stall.
UAVs benefit from high-lift systems for heavier payloads or shorter fields. Multi-element tails or deployable flaps are common on fixed-wing drones that must launch and recover in tight spaces.
Racing wings often use two or more elements (plus Gurney flaps) to produce large downforce at modest speeds, improving grip in braking and cornering. Rules usually cap element count and geometry, so careful design of slot gap, overlap, and flap angle is crucial to hit the aero targets without stalling the wing.
Hanley Innovations provides tools that streamline multi-element airfoil and wing design—from early concepts to practical, test-ready geometries:
Ready to accelerate your high-lift or downforce project?
Visit Hanley Innovations to explore MultiElement Airfoils, 3DFoil, and Stallion 3D.
How many elements are “too many”?
Diminishing returns set in as mechanical complexity, drag, and sensitivity increase. Most practical systems use one slat and one or two flap elements; motorsports rules often limit element count explicitly.
Do Gurney flaps count as an element?
They’re typically treated as a device on an element rather than a full element, but they can significantly boost lift/downforce at the right Reynolds numbers.
What’s the most sensitive parameter?
The slot (gap and overlap) and the deflection schedule. Small tweaks here can change peak performance and stall character.
Altitude is great—but control and stability win flights. In this video, I show how to take your rocket’s STL file and run a complete CFD analysis in Stallion 3D so you can predict side force, spin tendency, and CP shift before launch.
Estimates for CG/CP are a start, but they miss critical effects—fin misalignment, transonic bumps, and asymmetric forces. Stallion 3D gives you the full aerodynamic picture so launches are straighter, faster, and more reliable.
Smarter launches start here. — hanley@hanleyinnovations.com
A Perfect Celebration
On December 5-7, 2024, a symposium, Emerging Trends in Computational Fluid Dynamics: Towards Industrial Applications, was successfully held at Stanford University to celebrate the 90th birthday of CFD legend, Professor Antony Jameson. I am very grateful to Antony for giving Professor Chongam Kim and myself an opportunity to celebrate our 60th birthdays in conjunction with his. Thus, the symposium is also called the Jameson-Kim-Wang (JKW) symposium.
An organizing committee led by Professor Siva Nadarajah (McGill University) and composed of Professors Chunlei Liang (Clarkson University), Meilin Yu (UMBC), and Hojun You (Sejong University) did a fantastic job in organizing a flawless symposium. The list of speakers includes the who's who and rising stars in CFD. A special shoutout goes to Professor Juan Alonso and the sponsors for their support of the Symposium. A photo of the attendees is shown in Figure 1. Some good-looking posters from the sponsors are shown in Figure 2.
Antony's many pioneering contributions to CFD have been well documented in the literature. His various CFD and design optimization codes have shaped the design of commercial aircraft for many decades. Several aircraft manufacturers told stories about Antony's impact. We look forward to the release of the Symposium videos next year.
Next, I'd like to touch upon my personal connection to Antony. I first heard of his name and his work in China from my graduate advisor, Academician Zhang Hanxin. I still recall reading his paper on the successes and challenges in computational aerodynamics. I believe I first met Antony at an AIAA conference when he came to my talk on conservative Chimera. I did not get an opportunity to introduce myself. Our second meeting took place in China during an Asian CFD conference in 2000, where both of us were invited speakers. We sat at the same table with Charlotte (Mrs. Jameson) at a banquet. This time I was able to properly introduce myself.
Soon after that, we started collaborating on high-order methods, from spectral difference to flux reconstruction. I visited Antony's lab and co-organized his 70th birthday celebration at Stanford in late 2004. During a visit to his home, Antony shared his fascination with the aerodynamics of hummingbirds. I still recall receiving his phone call about proving the stability of the SD method with Gauss points as the flux points on a Saturday when I was at my son's soccer game!
The Symposium also gave me an opportunity to see many of my former students, some of whom I had not seen for more than two decades: Yanbing, Khyati, Prasad, Chunlei, Varun, Takanori, Meilin, Lei, Cheng, Feilin, Eduardo, and Justin. It was very gratifying to hear their stories after so many years.
The Symposium concluded with an amazing banquet. My friend and collaborator, H.T. Huynh, did a hilarious roast of me, and I could not stop laughing the whole time. H.T. has the talent of a standup comedian. Everything went smoothly, and we had a perfect symposium!
In the computation of turbulent flow, there are three main approaches: Reynolds averaged Navier-Stokes (RANS), large eddy simulation (LES), and direct numerical simulation (DNS). LES and DNS belong to the scale-resolving methods, in which some turbulent scales (or eddies) are resolved rather than modeled. In contrast to LES, all turbulent scales are modeled in RANS.
Another scale-resolving method is the hybrid RANS/LES approach, in which the boundary layer is computed with a RANS approach while some turbulent scales outside the boundary layer are resolved, as shown in Figure 1. In this figure, the red arrows denote resolved turbulent eddies and their relative size.
Depending on whether near-wall eddies are resolved or modeled, LES can be further divided into two types: wall-resolved LES (WRLES) and wall-modeled LES (WMLES). To resolve the near-wall eddies, the mesh needs sufficient resolution in both the wall-normal (y+ ~ 1) and wall-parallel (x+ and z+ ~ 10-50) directions in terms of the wall viscous scale, as shown in Figure 1. For high-Reynolds number flows, the cost of resolving these near-wall eddies can be prohibitively high because of their small size.
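To make these resolution requirements concrete, the first-cell spacing implied by a y+ target can be estimated from a flat-plate skin-friction correlation. The sketch below assumes the common Cf ≈ 0.0576 Re_x^(-1/5) turbulent flat-plate correlation and illustrative flow values; it is an order-of-magnitude estimate only, not a substitute for checking wall units on the actual solution.

```python
import math

def wall_spacing(u_inf, rho, mu, x, y_plus=1.0):
    """Estimate the first-cell wall-normal spacing for a target y+.

    Uses a flat-plate turbulent skin-friction correlation
    (Cf ~ 0.0576 Re_x^-1/5); an illustrative estimate only.
    """
    re_x = rho * u_inf * x / mu            # local Reynolds number
    cf = 0.0576 * re_x ** -0.2             # skin-friction coefficient
    tau_w = 0.5 * cf * rho * u_inf ** 2    # wall shear stress
    u_tau = math.sqrt(tau_w / rho)         # friction velocity
    return y_plus * mu / (rho * u_tau)     # dy = y+ * nu / u_tau

# Air at sea level, 50 m/s, 1 m downstream of the leading edge:
dy = wall_spacing(u_inf=50.0, rho=1.225, mu=1.8e-5, x=1.0)
```

For these modest conditions the spacing already comes out on the order of microns, which illustrates why WRLES meshes grow so quickly at high Reynolds numbers.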
In WMLES, the eddies in the outer part of the boundary layer are resolved while the near-wall eddies are modeled as shown in Figure 1. The near-wall mesh size in both the wall-normal and wall-parallel directions is on the order of a fraction of the boundary layer thickness. Wall-model data in the form of velocity, density, and viscosity are obtained from the eddy-resolved region of the boundary layer and used to compute the wall shear stress. The shear stress is then used as a boundary condition to update the flow variables.
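A minimal sketch of such a wall model, assuming a simple equilibrium log law u+ = ln(y+)/κ + B solved for the friction velocity by Newton iteration; real wall models also blend in the viscous sublayer and may include nonequilibrium effects, so this is illustrative only.

```python
import math

def wall_shear_stress(u, y, rho, mu, kappa=0.41, B=5.2, iters=50):
    """Solve the log law u/u_tau = ln(y u_tau / nu)/kappa + B for the
    friction velocity u_tau by Newton iteration, then return the wall
    shear stress tau_w = rho * u_tau^2.  Inputs are the velocity,
    density, and viscosity sampled at a matching point a distance y
    off the wall, as described above."""
    nu = mu / rho
    u_tau = max(math.sqrt(nu * u / y), 1e-12)  # laminar first guess
    for _ in range(iters):
        f = u / u_tau - (math.log(y * u_tau / nu) / kappa + B)
        df = -u / u_tau**2 - 1.0 / (kappa * u_tau)
        u_tau -= f / df
    return rho * u_tau**2

# Sampled velocity 20 m/s at 1 mm off the wall in air:
tau_w = wall_shear_stress(u=20.0, y=1e-3, rho=1.225, mu=1.8e-5)
```

The returned shear stress is what the LES then imposes as the wall boundary condition in place of the unresolved near-wall eddies.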
During the past summer, AIAA successfully organized the 4th High Lift Prediction Workshop (HLPW-4) concurrently with the 3rd Geometry and Mesh Generation Workshop (GMGW-3), and the results are documented on a NASA website. For the first time in the workshop's history, scale-resolving approaches were included in addition to the Reynolds-averaged Navier-Stokes (RANS) approach. These approaches were covered by three Technology Focus Groups (TFGs): High Order Discretization; Hybrid RANS/LES; and Wall-Modeled LES (WMLES) and Lattice-Boltzmann.
The benchmark problem is the well-known NASA high-lift Common Research Model (CRM-HL), which is shown in the following figure. It contains many difficult-to-mesh features such as narrow gaps and slat brackets. The Reynolds number based on the mean aerodynamic chord (MAC) is 5.49 million, which makes wall-resolved LES (WRLES) prohibitively expensive.
The geometry of the high-lift Common Research Model
University of Kansas (KU) participated in two TFGs: High Order Discretization and WMLES. We learned a lot during the productive discussions in both TFGs. Our workshop results demonstrated the potential of high-order LES in reducing the number of degrees of freedom (DOFs) but also contained some inconsistency in the surface oil-flow prediction. After the workshop, we continued to refine the WMLES methodology. With the addition of an explicit subgrid-scale (SGS) model, the wall-adapting local eddy-viscosity (WALE) model, and the use of an isotropic tetrahedral mesh produced by the Barcelona Supercomputing Center, we obtained very good results in comparison to the experimental data.
At an angle of attack of 19.57 degrees (free-air), the computed surface oil flows agree well with the experiment using a 4th-order method on a mesh of 2 million isotropic tetrahedral elements (a total of 42 million DOFs per equation), as shown in the following figures. The pizza-slice-like separations and the critical points on the engine nacelle are captured well. Almost all computations produced a separation bubble on top of the nacelle, which was not observed in the experiment. This difference may be caused by a wire near the tip of the nacelle used to trip the flow in the experiment. The computed lift coefficient is within 2.5% of the experimental value. A movie is shown here.
Comparison of surface oil flows between computation and experiment
Multiple international workshops on high-order CFD methods (e.g., 1, 2, 3, 4, 5) have demonstrated the advantage of high-order methods for scale-resolving simulations such as large eddy simulation (LES) and direct numerical simulation (DNS). The most popular benchmark from the workshops has been the Taylor-Green (TG) vortex case, for several good reasons.
Using this case, we were able to assess the relative efficiency of high-order schemes over a 2nd-order one, with the 3-stage SSP Runge-Kutta algorithm for time integration. The 3rd-order FR/CPR scheme turns out to be 55 times faster than the 2nd-order scheme in achieving a similar resolution. The results will be presented at the upcoming 2021 AIAA Aviation Forum.
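For reference, the 3-stage SSP Runge-Kutta time integrator mentioned above, in its standard Shu-Osher form, is short enough to write out; the spatial residual `rhs` stands in for whatever discretization supplies du/dt.

```python
def ssp_rk3_step(u, dt, rhs):
    """One step of the 3-stage strong-stability-preserving Runge-Kutta
    scheme (Shu-Osher form).  `rhs(u)` returns du/dt; `u` may be a
    float or any array type supporting scalar arithmetic."""
    u1 = u + dt * rhs(u)                            # stage 1: Euler
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))      # stage 2
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))  # stage 3

# Decaying scalar du/dt = -u: one step from u=1 with dt=0.1 should
# approximate exp(-0.1) to third order.
u_new = ssp_rk3_step(1.0, 0.1, lambda u: -u)
```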
Unfortunately the TG vortex case cannot assess turbulence-wall interactions. To overcome this deficiency, we recommend the well-known Taylor-Couette (TC) flow, as shown in Figure 1.
Figure 1. Schematic of the Taylor-Couette flow (r_i/r_o = 1/2)
The problem has a simple geometry and boundary conditions. The Reynolds number (Re) is based on the gap width and the inner wall velocity. When Re is low (~10), the problem has a steady laminar solution, which can be used to verify the order of accuracy of high-order mesh implementations. We choose Re = 4000, at which the flow is turbulent. In addition, we mimic the TG vortex by designing a smooth initial condition and employing enstrophy as the resolution indicator. Enstrophy, the integral of the squared vorticity magnitude, has been an excellent resolution indicator for the TG vortex. Through a p-refinement study, we are able to establish the DNS resolution. The DNS data can then be used to evaluate the performance of LES methods and tools.
Figure 2. Enstrophy histories in a p-refinement study
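Enstrophy, as used above, is straightforward to compute as a diagnostic. The sketch below uses second-order central differences on a uniform periodic grid and a Taylor-Green-like initial field; the grid size is illustrative, and a spectral derivative would match typical workshop post-processing more closely.

```python
import numpy as np

def enstrophy(u, v, w, dx):
    """Enstrophy E = 0.5 * integral |omega|^2 dV on a uniform periodic
    grid, with vorticity from 2nd-order central differences (np.roll
    handles the periodic wrap)."""
    def d(f, axis):  # central difference along one axis
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * dx)
    wx = d(w, 1) - d(v, 2)   # omega_x = dw/dy - dv/dz
    wy = d(u, 2) - d(w, 0)   # omega_y = du/dz - dw/dx
    wz = d(v, 0) - d(u, 1)   # omega_z = dv/dx - du/dy
    return 0.5 * np.sum(wx**2 + wy**2 + wz**2) * dx**3

# A smooth divergence-free TG-like field on a [0, 2*pi)^3 box:
n = 32
dx = 2 * np.pi / n
x, y, z = np.meshgrid(*(np.arange(n) * dx,) * 3, indexing="ij")
u = np.cos(x) * np.sin(y) * np.sin(z)
v = -np.sin(x) * np.cos(y) * np.sin(z)
w = np.zeros_like(u)
E = enstrophy(u, v, w, dx)
```

Tracking E over time against a reference history (as in Figure 2) is how the resolution of a scheme is judged for this class of problems.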
Happy 2021!
The year 2020 will be remembered in history even more than 1918, when the last great pandemic swept the globe. As we speak, daily new cases in the US are on the order of 200,000, while the daily death toll oscillates around 3,000. According to many infectious disease experts, the darkest days may still be ahead. In the next three months, we all need to do our very best by wearing a mask, practicing social distancing, and washing our hands. We are also seeing a glimmer of hope with several recently approved COVID vaccines.
2020 will be remembered even more for what Trump tried, and is still trying, to do: overturn the results of a fair election. His accusations of widespread election fraud were proven wrong in Georgia and Wisconsin through multiple hand recounts. If there were any truth to the accusations, the paper recounts would have uncovered the fraud, because computer hackers or software cannot change paper votes.
Trump's dictatorial habits were there for the world to see over the last four years. Given another 4-year term, he might just turn a democracy into a Trump dictatorship. That's precisely why so many voted in the middle of a pandemic. Biden won the popular vote by over 7 million and won the electoral college in a landslide. Many churchgoers support Trump because they dislike Democrats' stances on abortion, LGBT rights, and other issues. However, if a Trump dictatorship became reality, religious freedom might not exist anymore in the US.
Is the darkest day going to be January 6th, 2021, when Trump will make a last-ditch effort to overturn the election results in the Electoral College certification process? Everybody knows it is futile, but it will give Trump another opportunity to extort money from his supporters.
But, the dawn will always come. Biden will be the president on January 20, 2021, and the pandemic will be over, perhaps as soon as 2021.
The future of CFD is, however, as bright as ever. On the front of large eddy simulation (LES), high-order methods and GPU computing are making LES more efficient and affordable. See a recent story from GE.

Figure 1. Various discretization stencils for the red point
The investigation of near-critical state fluid jets is an important problem for various engineering applications such as propulsion and thermal systems. In these contexts, ejectors are used to convert flow work into kinetic energy and, ultimately, into a pressure lift in various systems, including gas turbines, liquid propulsion systems, and refrigeration systems. The ejector operating principle relies on a high-speed jet in single- or multi-phase conditions. The efficiency of ejector devices depends on the physics of the jet, especially under multi-phase operations.
Ejectors serve as expansion recovery devices in a wide range of engineering applications: flow devices that convert kinetic energy into pressure recovery. Different types of ejectors exist depending on the application. For example, ejectors used for gas turbine cooling expand high-pressure gas while mixing in gas-phase fluid, which increases the volumetric efficiency of the combustor; ejectors used in refrigeration systems expand liquid-phase fluid while mixing in gas-phase fluid, developing a two-phase flow and decreasing the compressor work input. Figure 1 presents a schematic diagram of a multi-phase ejector. The inlet of an ejector, often called the "motive" inlet, contains liquid at high pressure, while the suction contains the vapor phase of the same or a different fluid. High-pressure liquid flowing out of the motive throat induces a negative pressure gradient in the suction throat by increasing kinetic energy, which creates a suction effect on the vapor flowing through the suction inlet. The distinct vapor and liquid streams mix downstream in the mixing zone and then expand in the diffuser zone, increasing the pressure.
Ejectors are not only used in refrigeration systems, but they are also widely applied in oil and natural gas systems for waste gas recovery processes and in gas turbines to enhance cooling performance by improving compressor entrainment efficiency.
Improving the design and operational performance of ejectors in a refrigeration system is linked to reducing entropy generation, which limits further improvement of the system's coefficient of performance (COP). Local exergy analysis of an ejector operating with carbon dioxide (CO2) in a two-phase regime shows that entropy generation in the mixing zone is 2.92 times higher than in the diffuser zone. The high entropy generation in the mixing zone is linked to the turbulence evolution mechanism at the shear layer of the jet, where entropy generation is related to turbulence length scales. The operation of ejectors relies on the physics and control of the shear layer (for single-phase operation) and the liquid-gas interface (for multi-phase operation), which restricts improvement of the system COP. Therefore, understanding the evolution of shear-layer turbulence and the mechanism of liquid-gas interface instabilities on the jet inside ejectors could provide new insights to decrease entropy generation and maximize the COP of the system.

This current research, in collaboration with a technical team from Bechtel, aims to understand the various stages inside the ejector to identify pathways to improve the ejector efficiency. The ejector of interest is a liquid-vapor variable-geometry CO2 ejector, as shown in Figure 1. The flow inside the ejector comprises a subcooled jet, which is the primary energy input to the ejector; a gaseous suction flow, which increases the cooling capacity of the ejector cycle through work recovery; a mixing zone, where entrainment of the suction flow into the motive flow occurs; and a diffuser, which increases pressure and reduces the work required by a compressor. To conduct the analyses, a high-fidelity computational fluid dynamics (CFD) model is needed to resolve the boundary layers and interphase phenomena.
The computational domain of the ejector, shown in Figure 2, is modeled in cylindrical coordinates with axial (x), radial (r), and azimuthal (θ) directions. The domain includes the motive inlet (x/d = −13), suction inlet (−11 ≤ x/d ≤ −8), diffuser outlet (x/d = 23), and adiabatic no-slip walls. Boundary conditions are assigned based on experimental data. At the motive inlet, pressure, temperature, and CO2 mass fraction are prescribed, and the inlet gap is tuned to match the measured mass flow rate. The suction inlet is defined by its mass flow rate, temperature, and CO2 composition. The outlet pressure is fixed, with no backflow allowed, and all walls are treated as adiabatic with a no-slip boundary condition.

CONVERGE CFD software provides a robust platform for simulating complex, unsteady, multi-phase flows with minimal manual meshing. In this study, CONVERGE is used to solve the three-dimensional compressible Navier-Stokes equations coupled with phase transport in a large eddy simulation (LES) framework for CO2 ejector flows. Thermophysical and transport properties are sourced directly from the NIST database, enabling accurate modeling of real-fluid behavior across a wide range of thermodynamic states.
A key advantage of CONVERGE is its automatic cut-cell meshing, which accurately resolves complex geometries without requiring a user-generated mesh. This feature also enables box filtering for LES, ensuring that a large portion (≥80%) of turbulent kinetic energy is resolved. Furthermore, CONVERGE provides full control over subgrid-scale (SGS) models and constants, offering users flexibility comparable to that of in-house CFD codes.
Advanced grid control features include region-based embedding and Adaptive Mesh Refinement (AMR). Embedding refines the mesh locally (down to 0.125 mm), while AMR dynamically adapts grid resolution during the simulation (as fine as 0.0156 mm) based on gradients in velocity, temperature, and phase fraction. This results in a highly detailed, physics-driven mesh (60 million cells) that adapts to flow evolution without remeshing.


Although CONVERGE does not include built-in verification tools, the simulation results have been rigorously validated following the ASME V&V 20 standard. Grid convergence studies reveal negligible numerical uncertainty (≤0.13%), and a comparison with experimental data confirms the model’s predictive capability, with model error bounds of 2.5% ±2.66% for mass flow rate and 0.72% ±1.09% for suction pressure.
Lastly, spectral analysis of turbulent kinetic energy shows a clear inertial subrange with κ^(−5/3) scaling, confirming that the LES approach and discretization schemes successfully capture the dominant energy transfer mechanisms. Overall, CONVERGE enables high-fidelity simulations of multi-phase, turbulent flows with exceptional automation and accuracy.
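The κ^(−5/3) check amounts to fitting a slope in log-log coordinates over the inertial subrange. A minimal sketch, using a synthetic Kolmogorov-like spectrum rather than the study's data:

```python
import numpy as np

def inertial_range_slope(k, E):
    """Least-squares slope of log E(k) vs. log k over the supplied
    wavenumber band -- the diagnostic used to confirm a k^-5/3
    inertial subrange in a computed energy spectrum."""
    return np.polyfit(np.log(k), np.log(E), 1)[0]

# Synthetic spectrum over one decade of wavenumbers; the prefactor
# 1.7 is arbitrary and drops out of the slope.
k = np.linspace(10.0, 100.0, 50)
slope = inertial_range_slope(k, 1.7 * k ** (-5.0 / 3.0))
```

For an ideal power-law spectrum the fit recovers exactly −5/3; on real LES data one would restrict `k` to the band between the energy-containing and dissipative scales.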


The behavior of the motive jet in an ejector is governed by the turbulent structures that develop along the liquid-gas interface, directly influencing flow entrainment. Four distinct regimes are identified based on dominant physical mechanisms:

These regimes coexist and interact within the ejector. The instantaneous jet morphology, shown in Figure 7, is visualized using the spatial distribution of the density ratio ρ_CO2(g)/ρ_CO2(l). Grayscale shading ranges from dark (low gas-liquid density ratio) to white (high ratio), indicating interface transitions. The evolution of turbulent coherent structures at the interface is crucial for entrainment performance (ṁ_s/ṁ_m). The motive jet, an annular co-axial flow, is wall-bounded and subject to a streamwise adverse pressure gradient. Vorticity dynamics drive interface deformation:
In regime R2, the Kelvin–Helmholtz instability (KHI) leads to the formation of ring vortices, supported by ω_θ from regime R1. Azimuthal instabilities induce periodic bulges in these rings, forming counter-rotating vortex pairs around the jet shear layer. These ω_x structures continue to stretch and intensify due to angular momentum conservation, leading to thinner, more energetic vortex formations (Figure 8).
A detailed understanding of jet morphology within the mixing zone of an ejector is essential for the development of next-generation ejector designs. From a thermodynamic perspective, this region is the primary source of exergy destruction, and its optimization presents an opportunity for significant performance improvements. In this study, the jet morphology has been categorized into distinct flow regimes based on the dominant underlying physics. This regime-based classification lays the groundwork for the development of low-order models that capture only the most relevant physical phenomena, thereby enabling faster and more efficient computational strategies.

Ultimately, this physics-informed modeling approach is expected to accelerate the shape optimization of ejectors across a wide range of applications. The use of CONVERGE CFD software has significantly streamlined this process through its advanced features, particularly automatic meshing and granular numerical control, which align with the company’s guiding principle: “Never Make a Mesh Again.”
For an in-depth discussion of the methodologies and findings, please refer to the following articles:
This research has been funded by Bechtel National, Inc., USA. We would like to acknowledge the continued support from Mr. David Ladd, Dr. Leonard J. Peltier, and Prof. Ivan C. Christov. The authors would also like to thank Convergent Science Inc. for providing an academic license and technical support through their CONVERGE Academic Program.
Bhaduri, S., Peltier, L.J., Ladd, D., Groll, E.A., and Ziviani, D., “Regimes of a Decelerating Wall-Bounded Multiphase Jet Inside Ejectors,” Physics of Fluids, 37, 2025. DOI: 10.1063/5.0278015
Co-Author:
Allie Yuxin Lin
Marketing Writer II
A few years ago, I lived in a small suburban neighborhood in Portland, Oregon. More than once, as I was driving at a leisurely pace of 30 mph down a local road, someone would whiz by me at an outrageously high speed. While they probably weren’t going at 100 mph (as I would passionately claim to my passenger), it certainly felt like it.
Today, I work at a company that deals with modeling combustion, and that experience is how I taught myself the concept of the deflagration to detonation transition (DDT). If, in some dystopic universe, my reality and the speedster’s reality were merged into one, that new car would be going steady at 30 mph and then suddenly accelerating to 100 mph in under a second, theoretically experiencing DDT.
DDT is the process in which a slow-moving flame (i.e., my car) rapidly accelerates into a supersonic detonation wave (i.e., the speedster's car). The microseconds leading up to DDT are known as flame acceleration (FA), and these phenomena are typically studied together. Conventionally, FA and DDT are studied in large-scale settings such as supernova explosions, large shock tubes, or coal mine passages. However, emissions regulations and the rising demand for more compact energy systems have also motivated their study in much smaller settings such as microchannels. These devices offer enhanced heat and mass transfer with lower manufacturing costs and are used in a variety of applications, including electronics cooling, biological systems, and HVAC devices. However, combustible fuel mixtures are more prone to detonating when passing through the highly confined passageways of microchannels, which are similar in size to the diameter of a single strand of hair. Studying FA and the ensuing DDT in microchannels can increase our understanding of the conditions that trigger detonation and enable better control and mitigation strategies in high-pressure systems.
Much of the existing literature on explosion safety has centered on investigating the effect of thermal wall boundary conditions, which play a significant role in flame propagation by affecting heat loss, flame stability, and ignition behavior. Another factor that can influence flame propagation and detonation is heterogeneous chemistry, in particular, surface reactions at catalytic walls. In micro-reactors, reactive catalytic wall coatings can alter and induce chemical exchange at the wall, affecting the FA and DDT process. Catalytic walls provide a surface on which fuel/air mixtures can react; this heterogeneous combustion takes place on the catalyst surface, rather than in the gas phase. The bulk of the catalytic combustion literature has focused on catalytic combustion over noble metals such as platinum or rhodium. By contrast, transition metals like nickel have only been studied for chemical reforming, a process that alters the molecular structure of hydrocarbons to produce other chemicals. In this study, Suryanarayan (Surya) Ramachandran, a Ph.D. candidate at the University of Minnesota Twin Cities, teamed up with Professor Suo Yang and research engineers at ExxonMobil Technology. They examined hydrogen ignition and flame propagation in a microchannel with catalytic nickel walls, where the highly confined environment of the microchannel prompted additional concerns of FA and DDT.1 I’ll hand it over to Surya to tell us about his research!
Co-Author:
Suryanarayan Ramachandran
Ph.D. Candidate,
University of Minnesota
In an ideal hydrogen combustion system, the fuel/oxidizer mixture would consist of hydrogen, with oxygen and nitrogen coming from the air. In industrial settings, some combustion products, such as water, may make their way back into the fuel/oxidizer mix. As a result, the mixture becomes highly vitiated with H2O. To mimic a realistic combustion scenario, we simulated combustion with the mixture of hydrogen, oxygen, nitrogen, and water. This mixture, which is named case C1, showed no detonation.
This didn’t really answer any of our questions, since our research group set out to understand DDT. The C1 case didn’t show any detonation, so we wanted to figure out why it didn’t explode and if there would be another mixture that would actually show some kind of detonation. So, I thought, why not remove the water? The water isn’t really contributing to combustion or heat release; rather, it’s acting as a diluent. Plus, it has a high specific heat capacity, which means it pretty much acts like an energy sink by sucking away the heat release and reducing the overall flame temperature. By removing the water, we were left with a mixture of pure hydrogen and dry air, which we called C1d. C1d has nitrogen acting as the diluent in the mixture, but no vitiation (i.e., no water vapor). To evaluate other interactions and gather some comparison data, we also tested a H2/O2 mixture; this final variation was called C1p.
Since we wanted to study the influence of both gas-phase (homogeneous) and surface (heterogeneous) chemistry on the FA & DDT process, we decided to use CONVERGE for the CFD part of this study. The kind of detonation problems that we are studying require highly resolved meshes and Adaptive Mesh Refinement (AMR) to capture the flame front. In that sense, CONVERGE was the ideal choice, since it has the high-quality meshing capabilities we needed, as well as the option to include coupled homogeneous and heterogeneous surface chemistry.
To begin, we used CONVERGE to solve the governing multi-component reacting Navier-Stokes equations, accomplished through a collocated finite volume method (FVM), which conserves mass, momentum, total energy, and species mass fractions on a discretized mesh consisting of many cells. The velocities at the cell faces were obtained using a blended central and upwind scheme (i.e., the flux-blending scheme), where cell-face velocities represent weighted sums of upwinded (i.e., first-order accurate) and cell-averaged (i.e., second-order accurate) velocities. The Pressure Implicit with Splitting of Operators (PISO) scheme was employed to capture pressure-velocity coupling, while the Rhie-Chow interpolation scheme was used to avoid potential "checkerboarding" issues with the collocated grid.2 CONVERGE's biconjugate gradient stabilized (BiCGSTAB) linear solver was used for the pressure Poisson equation, a reformulation of the Navier-Stokes equations that allowed us to calculate pressure directly by decoupling it from the velocity field. Additionally, we used the SAGE detailed chemical kinetics solver to solve the gas-phase and surface combustion reactions. SAGE solved the surface coverages and gas-phase mass fractions, enabling coupled gas-phase/surface reactions at the wall.
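The flux-blending idea, face values built as a weighted sum of first-order upwind and second-order central (cell-averaged) contributions, can be illustrated in one dimension. The blending factor below is an arbitrary illustrative choice, not CONVERGE's internal setting:

```python
import numpy as np

def blended_face_values(phi, vel_face, beta=0.5):
    """Face values from a flux-blending scheme on a 1D periodic grid:
    a weighted sum of first-order upwind and second-order central
    values.  beta = 1 gives pure central differencing, beta = 0 pure
    upwind; face i lies between cells i and i+1."""
    phi_r = np.roll(phi, -1)                      # right neighbor
    central = 0.5 * (phi + phi_r)                 # 2nd-order central
    upwind = np.where(vel_face > 0, phi, phi_r)   # 1st-order upwind
    return beta * central + (1.0 - beta) * upwind

# Uniform positive advection across a simple profile:
phi = np.array([0.0, 1.0, 2.0, 1.0])
faces = blended_face_values(phi, vel_face=np.ones(4), beta=0.5)
```

Blending trades a little accuracy for robustness: the upwind fraction damps the odd-even oscillations a pure central scheme can produce near sharp fronts.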
CONVERGE’s AMR helped us refine the mesh in areas of greater computational complexity and coarsen the mesh in others. We chose not to use AMR in Case C1, due to the large flame thickness (δf= 700μm). For the purposes of this study, cells were refined according to the local cell temperature. To ensure finer meshes on the accelerating flame front, we only employed AMR when the cell temperature fell in the range of 800-1900 K. For Case C1d, we applied AMR on top of the base mesh resolution to ensure six cells spanned the small flame thickness (δf= 27μm). The final mixture, Case C1p, had an even smaller flame thickness of δf = 20 μm, so we further refined the mesh to achieve 16 points across the flame thickness, ensuring adequate resolution of the flame structure.
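The resolution bookkeeping behind these choices, i.e., how many refinement levels are needed to place a given number of cells across the flame thickness, reduces to a short calculation. The base mesh size below is hypothetical, chosen only to exercise the formula:

```python
import math

def amr_embed_level(dx_base, delta_f, cells_across):
    """Smallest refinement level n such that dx_base / 2**n resolves a
    flame of thickness delta_f with `cells_across` cells.  A sketch of
    the sizing arithmetic only; the AMR solver itself decides where
    and when to apply the refinement."""
    dx_req = delta_f / cells_across          # required cell size
    return max(0, math.ceil(math.log2(dx_base / dx_req)))

# e.g. a hypothetical 0.1 mm base mesh, 27-micron flame, 6 cells across:
level = amr_embed_level(dx_base=1e-4, delta_f=27e-6, cells_across=6)
```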
Next, we performed several validation studies for CONVERGE’s gas-phase and surface chemistry mechanisms to enhance confidence in our simulation results. For example, CONVERGE’s gas-phase SAGE detailed chemistry solver and its hydrodynamic coupling was compared with results from the PeleC solver, an open-source CFD code used for combustion applications. Validation results are shown in Figure 1.

CONVERGE’s surface chemistry module was validated against Chen et al.3, a well-cited paper that simulates a catalytic micro-tube with gas-phase and surface reactions for premixed H2/air mixtures. This publication described a simple catalytic combustion study focusing on flame stabilization, rather than FA/DDT. CONVERGE’s results matched well with those of the paper.1
In Case C1, the flame did not exhibit acceleration, nor did it become a detonating flame. Rather, it simply propagated with a constant flamespeed. However, compared to the traditionally observed parabolic-like flame front profile, the flame inverted whenever surface chemistry was active (i.e., when the chemical reactions at the surface were explicitly modeled and accounted for), as seen in Figure 2. This reflects the preferential propagation of the flame along the walls due to catalytic surface chemistry.

However, when surface chemistry was disabled, the flame returned to the traditional parabolic shape, as shown in Figure 3.

After finding strong production of the intermediate radicals OH and O along the wall surface, we concluded that catalytic surface reactions promote preferential propagation of the flame via the production of reactive intermediates that directly promote gas-phase combustion. In other words, the flame propagates along the catalytic walls due to the surface reactions involving the fuel/oxidizer mixture and the intermediate radicals. We also found that the temperature distributions for the C1 cases run with surface chemistry were higher than those for the cases run with gas-phase chemistry only, likely because the surface chemistry calculations account for the additional heat generated by surface reactions.
In all C1 cases, the flame did not exhibit acceleration. This is attributed to the presence of diluents and vitiation in the mixture, which lowers the flamespeed and inhibits FA/DDT.4 Therefore, the same simulation and analysis procedure was carried out for the C1d mixture. In this case, removing water from the mixture led to higher flamespeeds and FA, but not DDT. In contrast with the vitiated cases (C1), flame inversion occurred only in the case where surface chemistry was enabled without gas-phase chemistry; in cases with gas-phase reactions, the flame became parabolic. The flame in all C1d cases accelerated to high speeds (i.e., around Mach 0.1). Unlike case C1, there was no flame propagation along the wall, since the short residence time (i.e., the time available for surface chemistry to couple with gas-phase chemistry) reduced the effect of the catalytic walls. The C1d cases exhibited rapid FA but did not reach DDT, which we attribute to the long DDT run-up distance (i.e., the distance required for the flame to undergo the DDT process). On the other hand, the C1p cases exhibited rapid DDT after forming a tulip-like flame front in the initial stages. Both flame branches propagated preferentially along the wall before eventually uniting, forming a detonation front, as shown in Figure 4.

Thanks, Surya! To recap, Surya and his team, along with researchers from ExxonMobil Technology, used CONVERGE to simulate the propagation and acceleration of H2/O2 and H2/air flames for three different fuel mixtures over catalytic nickel walls. Each mixture responded differently to the interplay between surface and gas-phase chemistry, resulting in varying outcomes in terms of FA and DDT. Read more about Surya’s research in his paper!
Overall, this study was the first in the field to consider coupled gas-phase and surface reactions in catalytic nickel microchannels for assessing DDT. These findings have the potential to drive more specific studies tailored to industrial scenarios to improve explosion safety.
[1] Ramachandran, S., et al. “Flame Acceleration and Deflagration to Detonation Transition in a Microchannel with Catalytic Nickel Walls.” Physics of Fluids, 36(11), 2024, 116-143. https://doi.org/10.1063/5.0235540
[2] Zhang, S., Zhao, X., and Bayyuk, S., “Generalized Formulations for the Rhie–Chow Interpolation.” Journal of Computational Physics, 258, 2014, 880–914. https://doi.org/10.1016/j.jcp.2013.11.006
[3] Chen, G.-B., et al. "Effects of Catalytic Walls on Hydrogen/Air Combustion inside a Micro-Tube." Applied Catalysis A: General, 332(1), 2007, 89–97. https://doi.org/10.1016/j.apcata.2007.08.011
[4] Ramachandran, S., Srinivasan, N., Wang, Z., Behkish, A., and Yang, S., "A Numerical Investigation of Deflagration Propagation and Transition to Detonation in a Microchannel With Detailed Chemistry: Effects of Thermal Boundary Conditions and Vitiation." Physics of Fluids, 35(7), 2023. https://doi.org/10.1063/5.0155645
Author:
Elizabeth Favreau
Marketing Writing Team Lead
It’s hard to beat the thrill of a NASCAR race: the roar of engines as cars careen around the track in a blur, the deafening cheers of the fans, the animated voices of the announcers booming over the din. The atmosphere is electric, and excitement is palpable in the air as the cars flash across the finish line.
Guided by the deft hands of the drivers, the race cars are propelled by powerful engines to mind-boggling speeds, exceeding 200 mph on some tracks. The engine is the heart of the car, and it can easily make or break a race. Even minor tweaks to the engine can provide the small boost of power needed to best the competition.
Figuring out what tweaks to make, however, is not always easy. Exploring many different designs can be expensive, not just in terms of money, but also time—and time is a highly valued commodity in the racing world. With dozens of races each season, and each one in need of a specialized engine, being able to efficiently assess different design options is key.

Roush Yates Engines designs, tests, and builds purpose-built race engines for the NASCAR Cup Series and the NASCAR Xfinity Series. Founded in 2004 and headquartered in North Carolina, Roush Yates is the exclusive engine builder to Ford Performance. With nearly 400 wins across the two NASCAR series, Roush Yates is regularly powering cars to victory and championships. So how do they do it? In addition to state-of-the-art test facilities and a team of brilliant engineers and technicians, incorporating advanced modeling software like CONVERGE into their design process is one of their key strategies for winning.
Designing racing engines is obviously a different beast than designing engines for everyday passenger vehicles. Each engine must be tailored to the specific tracks where it will be raced, with the goal of eking out every bit of performance possible. To achieve this, you need to consider a variety of factors, including the length of the track (typically ranging from 1/2 mile to over 2 miles), the vehicle traction available, differences in driver style, climate conditions, and even elevation.
“It’s very interesting to design for those types of different environments to make sure we’re doing the most we can to bring the best engine we can to each track,” says Jamie McNaughton, Technical Director at Roush Yates Engines.
Power isn’t the only necessity in a racing engine, either; the engines also need to be durable. While these engines won’t be racking up hundreds of thousands of miles, they need to be at peak performance while being driven under extreme conditions for up to three races and numerous practice sessions, which can add up to some 1,500 miles. All the power in the world won’t help you win if your engine breaks down mid-race!
So, you need performance, reliability, and durability. No pressure, right? Now add in the fact that you’re also working on a very short timeline. While the design cycle for a passenger vehicle engine might be on the order of three years, in the NASCAR world you’re working with timelines as short as 8–12 months. And there’s a lot that needs to be packed into those months, from planning and analysis to testing and production—any tools that can help speed up your design process can be a major advantage.
So how does Roush Yates leverage CFD in their engine design process?
Per the rules of NASCAR racing, manufacturers are working with homologated parts, i.e., parts that have been officially approved by the organization. Manufacturers can tweak these parts, but they can’t go off and make something brand new. That means that Roush Yates’ engineers are working within well-defined boundaries to try to find minor modifications that result in small but meaningful gains in power and performance.
This is where CFD shines. “Finding the last 0.5% that we’re looking for requires comprehensive 3D modeling,” says Jamie.
Roush Yates uses CONVERGE to model a variety of powertrain components, including intake manifolds, cylinder head ports, exhaust systems, intake systems, and cooling systems. To improve the engine’s gas exchange process, they use CONVERGE to analyze intake manifold flow losses, tune the manifold, and model the exhaust systems. Furthermore, they conduct cooling system evaluations to ensure that the coolant flow rate and system pressure are correct for the engine specifications and the tracks being raced.
“We’ve found CONVERGE’s combustion modeling and meshing technique to be very advantageous for complex geometries and transient simulations,” says Jamie. “Our main goal at Roush Yates is to have the highest power, efficiency, and the most reliable engines in NASCAR. Working toward these goals, we have continuously improved in all these areas throughout the race season with the help of CONVERGE.”
CFD also helps Roush Yates accelerate their development efforts to meet the rapid design cycles required by the sport. The power of simulation lies in the ability to test many different design iterations before manufacturing any components. Compared to physical prototyping, CFD simulations are relatively fast and cheap, and virtually modifying the designs of the components can be done in a matter of clicks.
CONVERGE’s autonomous meshing makes it fast and simple to set up many different cases, because you don’t need to manually create any meshes. This allows you to analyze dozens or even hundreds of design options to determine which ones are the most promising. Only needing to build and test a much smaller number of components leads to a faster time to the track. Moreover, being able to explore so many designs allows you to find those small increases in performance that can end up providing a big advantage on the track.
“CONVERGE enables rapid setup of simulation models, and it has a fast learning curve—new analysts can be brought up to speed on CONVERGE in a matter of weeks,” says Jamie. “Additionally, the more recent versions of CONVERGE have runtimes that scale very well on CPUs. The values of speed and simplicity are some of the most essential capabilities for a CFD tool in the motorsport industry.”
For Roush Yates, their advanced design techniques clearly pay off. Boasting 12 NASCAR Cup Series championships, 17 NASCAR Xfinity Series championships, and hundreds of wins and poles across the two series, Roush Yates is at the top of their game in the motorsport industry. They employ more than 100 people in their engine shop, doing everything from design and simulation to building and testing, in order to compete on an international stage in upward of 70 events each year.
As Jamie says, “It’s the kind of situation where if you have a job you really love, it’s not so much work as having a great time, continuing to learn and build a great team to achieve our goals.”
No one can say how the next race will unfold, but one thing’s for sure—we’ll continue to cheer on our partners at Roush Yates and do our best to support them on their NASCAR journey.
Learn more about Roush Yates’ engine design process at our upcoming webinar, The Power of CONVERGE for Race Engine Development at Roush Yates Engines, presented by Jamie on September 10 at 10:30am CDT! Register here.
Author:
Allie Yuxin Lin
Marketing Writer
In 2017, Convergent Science expanded to Pune, India, welcoming Ashish Joshi as the founding leader of our new office. Back then, the office was a quiet hub of possibility with plenty of open desks, a one-person team, and the excitement of building something new. We wrote a blog post back then, documenting the early days and the potential the office held. Fast forward eight years, and the office has become a bustling environment, filled with new ideas, forward-thinking people, and dynamic energy. Convergent Science India LLP has grown not just in size but also in spirit as we welcomed new colleagues, took on interesting projects, and worked to build a collaborative culture. But you know the ending, so let’s start at the beginning.

The initial purpose of the India office was to capture the internal combustion engine (ICE) CFD market in the Indian region. The India office was born in “Supreme HQ,” an office space with a maximum capacity of 12 employees. In August 2017, Ashish welcomed his first teammate, Kamlesh Patel.
“Being the first employee at Convergent Science India wasn’t just about joining early—it was about helping shape the foundation of something lasting,” says Kamlesh. “From navigating new challenges and giving training courses to growing alongside brilliant minds and forming lifelong friendships—this journey has been deeply personal and incredibly fulfilling. I’m proud of how far we’ve come, grateful for the people who made it possible, and excited for everything still to come.”
The two became the core of our Indian operations, with Kamlesh focusing primarily on ICE support. Soon after, Harshan Arumugam joined the team to explore how CONVERGE could break into new application areas beyond engines.
As at any start-up organization, the early days of Convergent Science India were riddled with challenges. Even small administrative tasks like opening company bank accounts or ensuring tax compliance were immense hurdles for the three-person team. Not to mention, CONVERGE awareness in India was minimal at the time, so the team had to educate the market while simultaneously training new engineers and building brand credibility.
As operations stabilized, the vision expanded. The team began exploring markets in neighboring Southeast Asia, carrying the CONVERGE message beyond India and across international waters. A major milestone was the successful organization of the first CONVERGE User Conference in the region in 2019. Strong support from the world headquarters played a crucial role in strengthening the company’s reputation and boosting the credibility of the India office. Through this newfound visibility, the India team was able to exert broader regional influence, quickly pulling in new CONVERGE customers.
About a year or so in, Ashish proposed a novel idea: to expand the India office to include functions outside of technical support. The leadership team in Madison approved, and soon, the team began looking to fill roles in Marketing, Documentation, Testing & Validation, Development, and more. By and by, the India office evolved from a small service branch to a full-fledged contributor to Convergent Science’s global operations.
It wasn’t long before our expansion efforts started to show. By 2025, Convergent Science India had become a diverse, cross-functional powerhouse, and its headcount of 36 employees is a clear marker of how far we’ve come.

“Being not only the newest but also the youngest employee, I feel excited to be working here and learning about CFD. As my first full time position after college, I had no idea what to expect, but I felt both welcomed and challenged,” says Rohit Kamath, the office’s newest hire. “The environment is lively and everyone is so knowledgeable and approachable. Not to mention, the office events, like diya painting or pottery workshops, keep our day-to-day life fresh and exciting. I’m grateful for the sense of community I’ve found here. Whether it’s lunch breaks to the nearby coffee shop for the best 35 ₹ ($0.40) cup of coffee or working on new marketing applications, there’s always something new to learn and grow from.”
As new hires poured in, the original Supreme HQ office space started filling up, proof that our ambitions were quickly outgrowing our humble beginnings. As such, we moved to a larger space at IndiQube Unity Tower. The new office provided more room for our rapidly expanding team. After overseeing the move to the new office building and getting things established there, Ashish moved to the U.S. to pursue a different role within the company. He passed the managerial reins to Yajuvendra Shekhawat (Yaju), who is now the India office’s general manager.
Under Yaju’s guidance, the India team has made inroads into application areas beyond ICEs, although they remain the largest component of our market in the Indian region. Turbomachinery is an emerging focus, and the office has also had success in the oil and gas industry. Simultaneously, the team has been actively encouraging existing clients to use CONVERGE for applications beyond ICEs, broadening our solver’s foothold in R&D environments. The office has also built strong relationships with leading academic institutions, particularly across the IIT system. Many of these prestigious institutions now use CONVERGE for a wide variety of research applications. On the industrial front, the office also works with most of the major automotive companies in India that are engaged in IC engine R&D.
With the current office at IndiQube Unity Tower now nearing capacity, Yaju and his team are actively exploring options for the next phase of expansion. For now, there’s still room to grow, and a lease renewal is under consideration for September 2025. As the team continues to expand in both size and scope, it’s clear that the India office will remain a vital part of Convergent Science’s global operations.
“In 2017, I joined Convergent Science India, fresh out of university with a passion for internal combustion engines,” says Kamlesh. “Ironically, my first interaction with Ashish was a polite rejection for an internship at CS India due to limited resources. Life clearly had other plans. From a two-person team to a thriving office of nearly 40, I’ve had the privilege of growing with CS India from day one. Early on, I was delivering training courses to customers and universities, a challenge that pushed me out of my comfort zone and helped me grow. What makes CS India truly special is the people. From cricket and coffee to road trips and other milestones, we’ve built friendships that go far beyond work. A huge thanks to Ashish for setting the tone and Yaju for continuing to lead with empathy, integrity, and vision. They, along with our global leadership, have given all of us the freedom to grow, explore, and evolve not just as engineers, but as people.”

Kamlesh’s words are indicative of the India office’s vibrant and inclusive culture. The team understands that productivity isn’t just about working hard behind a screen; it can also look like birthday celebrations in the office, outdoor cricket games, and company walks to the nearest convenience store for coffee or other delightful snacks. As we look ahead to the near future, we’re excited to keep expanding, evolving, and writing the next chapter of our office’s story together.
Author:
Allie Yuxin Lin
Marketing Writer
In my first year of university, I became enamored with science fiction novels, particularly those dealing with the subgenre of time travel. During one of my literary pursuits, I came across the story of a 20th century nurse who manages to save the lives of many 16th century soldiers because she engineered a modern syringe from a viper’s hollow fang. While the modern hypodermic needle was not invented until the 1850s, the first syringe (not necessarily hypodermic) was created in 1650 based on Pascal’s Law, which states that a pressure applied at any point in a confined fluid will be directly transmitted throughout the fluid. I would later learn of another indispensable part of modern civilization that is also based on Pascal’s Law and is, you could say, transforming lives in its own way.
A piston pump is a type of reciprocating pump in which the reciprocating motion of a piston forms a chamber. When the pump expands, the chamber draws in fluid through a valve; when the pump contracts, the chamber expels fluid through a separate valve. A syringe’s plunger works by the same mechanism, as do hand soap dispensers, well pumps, bicycle pumps, and more. These machines have a simple design, which has allowed them to become a critical part of the oil and gas industry, where they are primarily used to transfer fluids at high pressures during extraction and processing operations. Their function as a positive displacement device, as well as their ability to generate high pressures and handle a wide range of fluid types, make piston pumps particularly attractive for the oil and gas industry. In particular, they are used in tasks such as well stimulation (including hydraulic fracturing and acidizing), mud pumping during drilling, chemical injection for corrosion inhibition, flow assurance, wellhead service, and high-pressure fluid transfer in pipelines and processing facilities.
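The pump’s ideal throughput follows directly from this geometry: the volume swept by the piston each stroke, times the stroke rate. The sketch below illustrates that textbook relationship with hypothetical numbers (the bore, stroke, speed, and efficiency are made up for illustration, not taken from any pump discussed here):

```python
import math

def piston_pump_flow_rate(bore_m, stroke_m, rpm, volumetric_efficiency=1.0):
    """Ideal volumetric flow rate (m^3/s) of a single-cylinder,
    single-acting piston pump: swept volume per revolution times
    revolutions per second, scaled by volumetric efficiency."""
    area = math.pi * (bore_m / 2) ** 2   # piston face area (m^2)
    swept_volume = area * stroke_m       # volume displaced per stroke (m^3)
    return swept_volume * (rpm / 60.0) * volumetric_efficiency

# Hypothetical example: 100 mm bore, 150 mm stroke, 300 rpm, 95% efficiency
q = piston_pump_flow_rate(0.100, 0.150, 300, 0.95)
print(f"{q * 1000:.2f} L/s")
```

Real pumps fall short of this ideal figure through leakage, valve lag, and, as discussed below, cavitation.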

Given their importance in industry, finding the right tools to model piston pumps can offer valuable insights into the design and application of these ubiquitous tools. However, piston pumps often involve complex moving boundaries, as well as intricate piston motion and valve dynamics, which may pose a challenge for simulation. These apparatuses are also prone to cavitation, which refers to the formation and collapse of vapor bubbles in the pump’s fluid. This happens when the working pressure inside the pump falls below the fluid’s vapor pressure, causing localized vaporization. When these bubbles collapse, they create shock waves that may lead to undesired vibrations, machinery damage, and reduced efficiency over time.
CONVERGE is a useful tool for piston pump simulations because it can efficiently overcome many of the challenges associated with these devices. Our solver automatically generates the computational mesh at each time-step, eliminating the need for complex re-meshing strategies. Adaptive Mesh Refinement (AMR) ensures high resolution where it is needed without incurring extensive computational costs. Fluid-structure interaction (FSI) modeling can accurately track the interaction between the piston, the valves, and surrounding fluid to predict pressure and flow behavior. Furthermore, CONVERGE includes several built-in cavitation models and multi-phase capabilities that help predict vapor formation, bubble collapse, and pressure spikes.
In this CONVERGE case study, we simulated a piston pump with plate valves to regulate the pressure and suction sides and compared our results to experimental data.1 In this geometry,2 the flow of the working fluid (water) is induced by the oscillatory movement of the plunger. As the plunger reaches its minimum displacement, the pump begins its suction stroke; similarly, as the plunger reaches its maximum displacement, the pump begins its discharge stroke.
CONVERGE’s FSI modeling captured the dynamic relationship between the fluid and the plate valves, the pump chamber, and the suction and pressure pipes. The two-way coupled FSI approach predicted the rigid-body motion of the plate valves resulting from the balance between the fluid load and suction pressure on one side and the spring loads on the other. In this study, both valves were set up as 1-DOF FSI objects, i.e., they could only move translationally, along the x-axis. The FSI spring feature models spring forces between a fixed object and a rigid FSI object (the valve). The model approximates the force of a linear coil spring, with specified parameters for stiffness, damping constant, length, and pre-load.
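A linear coil spring model of the kind described above amounts to a simple 1-DOF force balance. The sketch below is one plausible reading of those parameters (stiffness, damping, pre-load), not CONVERGE’s actual FSI spring implementation, and the numbers in the example are hypothetical:

```python
def valve_spring_force(compression, velocity, k, c, preload):
    """Axial closing force a linear coil spring exerts on a 1-DOF valve.
    compression: spring compression beyond its installed length (m)
    velocity:    valve velocity along the travel axis (m/s)
    k:           spring stiffness (N/m)
    c:           damping constant (N*s/m)
    preload:     force already present at the installed length (N)"""
    return preload + k * compression - c * velocity

# The valve starts to open once the net fluid load on its face exceeds
# this closing force. Hypothetical values: 1 mm lift, valve at rest.
closing_force = valve_spring_force(0.001, 0.0, 5000.0, 10.0, 20.0)  # N
```

In the two-way coupled case, this spring force and the fluid pressure load together set the valve’s acceleration at each time-step.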
Other CONVERGE features that aided in this simulation include the RNG k-epsilon model, which accounted for the turbulent flow in the pump. The phase change between the liquid and vapor phases was captured using cavitation modeling, specifically the homogeneous relaxation model (HRM). HRM predicts the mass exchange between the liquid and vapor phases and describes the rate at which the liquid-vapor mass interchange approaches equilibrium. In this case, we used time scale coefficients for the condensation and evaporation of water to predict mass flow rate and discharge.
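The core idea of HRM is that the vapor mass fraction relaxes toward its local equilibrium value over a finite timescale. Here is a minimal sketch of that relaxation equation with a toy explicit-Euler integration and made-up numbers; it illustrates the concept only, not CONVERGE’s implementation:

```python
def hrm_rate(x, x_eq, theta):
    """Homogeneous relaxation model: the vapor mass fraction x relaxes
    toward its local equilibrium value x_eq over a timescale theta (s),
    i.e. dx/dt = (x_eq - x) / theta."""
    return (x_eq - x) / theta

def relax(x0, x_eq, theta, dt, steps):
    """Explicit-Euler integration of the relaxation ODE (toy example)."""
    x = x0
    for _ in range(steps):
        x += hrm_rate(x, x_eq, theta) * dt
    return x

# Pure liquid (x = 0) flashing toward an equilibrium vapor fraction of 1
# over a hypothetical 1 ms relaxation timescale
x_final = relax(0.0, 1.0, theta=1e-3, dt=1e-4, steps=100)
```

Small theta means near-instantaneous equilibrium (rapid flashing); large theta delays vaporization, which is how the condensation and evaporation time scale coefficients shape the predicted mass exchange.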
For a more accurate simulation, velocity- and void fraction-based AMR were applied to refine and coarsen the mesh depending on the resolution requirements. In addition, fixed embedding was employed around the valves and the piston crown to maintain a fine resolution while keeping the rest of the grid coarse. Pressure-velocity coupling was captured with the Pressure Implicit Splitting of Operators (PISO) scheme, which performs the PISO algorithm in a loop until it reaches a user-specified PISO tolerance value.
Overall, there was good agreement between the experimental values and CONVERGE data, as measured by the valve lift. In addition to accurately capturing the amount of displaced volume in the pump, our simulation effectively predicted compressibility effects.

Much like the inventive syringe, piston pumps—which are rooted in the same scientific principles—are an indispensable part of modern industry. Their simple yet powerful design, based on Pascal’s Law, allows them to perform critical tasks in the oil and gas sector, in spite of challenges such as multi-phase dynamics and cavitation. In this case study, we leveraged CONVERGE’s innovative tools, including FSI and multi-phase flow modeling, to simulate two-phase flow in a reciprocating displacement pump incorporating fluid-actuated valve movement. Advanced simulations such as the one outlined in this blog help refine our understanding of piston pumps, ensuring they continue to function efficiently and effectively under all circumstances.
[1] Anciger, D., “Numerische Simulation der Fluid-Struktur-Interaktion fluidgesteuerter Ventile in oszillierenden Verdrängerpumpen.” Ph.D. thesis, Technische Universität München, Munich, Germany, 2012.
[2] Deimel, C., et al. “Numerical 3D Simulation of the Fluid-Actuated Valve Motion in a Positive Displacement Pump with Resolution of the Cavitation-Induced Shock Dynamics.” Eighth International Conference on Computational Fluid Dynamics (ICCFD8), ICCFD8-2014-0433, Chengdu, China, July 14-18, 2014. DOI: 10.13140/2.1.3443.2326
Author:
Allie Yuxin Lin
Marketing Writer
Allow me to paint a picture for you. You’re an auto manufacturer, and you realize that the increased demand for fuel efficiency is pushing the industry toward new engine designs that can reduce fuel consumption while abiding by stricter governmental regulations on emissions. To accommodate this, you must follow the industry standard and rely on both experimental prototyping and numerical modeling. As you learn more about numerical simulation, you see that there are two approaches that you could take, so you start exploring these in depth. The design of experiments (DoE) technique explores the design space through many simulations and creates a response surface to optimize outcomes. This approach allows you to run many concurrent simulations to achieve quick design times. However, traditional linear regression-based response surface methods (RSMs) are unable to capture the complex, non-linear interactions in engine combustion. The second option involves the application of genetic algorithms (GAs), which optimize designs through multiple simulations over many generations.1 Your research shows that the GA method is very effective at exploring optimal design strategies, but it typically requires many generations to converge, leading to an extended design turnaround of up to several months.
Now you’re facing a difficult predicament. You have two options in front of you: one that will solve the problem within a reasonable timeframe but might miss out on the optimal solution, and another that is robust but computationally costly.
Enter machine learning (ML) optimization. Offering rapid project turnaround, cost-efficiency, and knowledge of the full design space, ML optimization is a game-changer in the field.2 Trained on DoE data, the ML tool has access to a wealth of information across the entire design space that would not be obtained through traditional sequential optimization methods. With a sufficiently complex ML model, you can capture the non-linear relationships that a DoE alone cannot, while keeping the optimization turnaround time low.
In previous versions of our software, optimization could be accomplished through our in-house CONVERGE Genetic Optimization (CONGO) utility, which enables you to run a GA optimization or a DoE interrogation study. A GA takes a survival-of-the-fittest approach to design optimization: successive populations of randomly generated input parameters are evolved over many generations, selecting for the highest user-defined merit.
In late 2024, we released an ML tool in CONVERGE Studio that enables rapid optimization. First, you will identify the parameters that you want to vary during your optimization study (e.g., injection pressure, EGR ratio), and define the performance metrics you will use to assess the merit of your simulation results (e.g., minimum fuel consumption, minimum NOx emissions). The tool will then initialize a DoE by systematically generating a set of input variables for CONVERGE simulations that span your design space. A Latin hypercube sampling approach can be used to maximize the minimum distance between DoE sample points, producing a quasi-random sample that better captures the underlying data distribution compared to a random sample. After generating input files for the DoE, CONVERGE users can run their cases concurrently on CONVERGE Horizon, our cloud computing service that provides affordable, on-demand access to the latest high-performance computing (HPC) technologies.
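To make the sampling step concrete, here is a minimal pure-Python Latin hypercube sampler. This is an illustrative sketch of the technique, not CONVERGE Studio’s implementation, and the two parameter ranges are hypothetical stand-ins for quantities like injection pressure and EGR ratio:

```python
import random

def latin_hypercube(n, d, seed=0):
    """n samples in [0, 1)^d: each axis is split into n equal strata and
    each stratum receives exactly one (jittered) sample per dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(d):
        col = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(col)                                  # decorrelate the axes
        columns.append(col)
    return [tuple(row) for row in zip(*columns)]

# 16-point DoE over two hypothetical design parameters:
# injection pressure in [100, 250] MPa and EGR ratio in [0, 0.4]
unit = latin_hypercube(16, 2)
doe = [(100 + 150 * p, 0.4 * e) for p, e in unit]
```

Because every stratum of every axis is hit exactly once, the minimum spacing between samples is better controlled than with purely random sampling, which is why the resulting DoE covers the design space more evenly.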
The results from the DoE can now serve as the training data for the ML model. Since the most appropriate ML algorithm for a particular set of data cannot be determined a priori, the ML tool will combine several different algorithms through ensemble learning: ridge regression, random forest, gradient boosting, support vector machine, and neural network. This ML meta learning model will identify the combination of the five algorithms that best emulates the CFD setup. You can then use the trained ML meta model to predict the optimal case, evaluated with your predefined performance metrics. Finally, you can run the predicted best case in CONVERGE to confirm the results.
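The ensemble step can be sketched with scikit-learn’s stacking API, using the five algorithm families named above on toy stand-in data. This is an illustration of the general meta-learning technique, not CONVERGE’s ML tool; the data, model settings, and merit function are all hypothetical:

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

# Toy stand-in for DoE results: X = design parameters, y = merit values
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.normal(size=200)

# Five base learners blended by a meta-learner (here, another Ridge)
ensemble = StackingRegressor(
    estimators=[
        ("ridge", Ridge()),
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("svm", SVR()),
        ("nn", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                            random_state=0)),
    ],
    final_estimator=Ridge(),
)
ensemble.fit(X, y)
score = ensemble.score(X, y)  # R^2 of the blended model on the training data
```

The meta-learner weights each base model by how well its cross-validated predictions track the data, so the blend adapts to whichever algorithm family happens to suit the response surface.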
The ML tool offers a streamlined process for rapid and accurate optimization. The goal is not to replace CFD with ML, but rather to use ML in conjunction with CFD to enable fast, optimization-based design. A simplified schematic of the process can be seen in Figure 1.

While CONVERGE’s ML tool can be called within a user-defined function (UDF) for different purposes, such as reduced-order modeling, the approach is primarily targeted at design optimization. Its flexibility and ease of use enable the tool to process copious amounts of data, uncover nuanced patterns, and provide actionable insights.
To increase the efficiency of internal combustion engines, we partnered with Polaris and Oracle Cloud in 2021 to combine ML, CFD, and HPC for an exhaust port optimization study.
After identifying five exhaust port parameters to vary and parametrizing the geometry, the team used Latin hypercube sampling to set up a DoE study with 256 cases. The cases were run on CONVERGE Horizon in less than a day. We split the wealth of data generated by the DoE study into a training set (90% of the DoE data) and a test set (10%) for an ML emulator. This two-step process ensures the ML emulator can genuinely predict new designs, rather than simply regurgitating the data from the DoE. Having confirmed the efficacy of the ML emulator, the team then used the trained emulator to predict the optimal case that minimized the exhaust port pumping work. The optimization study produced a small yet significant improvement in exhaust port efficiency. With traditional methods, an experimental optimization would have been far more expensive and taken significantly more time. However, thanks to the use of ML and HPC, this study was completed in a few days rather than several months. For more information, read our blog, which goes into detail about the design, methodology, results, and future outlooks of this study.
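The train/test/optimize workflow can be sketched on toy data. Here a hypothetical 2-parameter design space and an analytic "pumping work" surface stand in for the actual five-parameter study, and a single gradient-boosted emulator stands in for the full ensemble:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in: 256 DoE samples of a 2-parameter design space, with a
# made-up "pumping work" response minimized at (0.3, 0.7)
rng = np.random.default_rng(1)
X = rng.uniform(size=(256, 2))
y = (X[:, 0] - 0.3) ** 2 + (X[:, 1] - 0.7) ** 2

# 90/10 train/test split, mirroring the study's validation step
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10,
                                          random_state=0)
emulator = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
r2 = emulator.score(X_te, y_te)  # held-out accuracy of the emulator

# Query the cheap emulator on a dense grid to locate the predicted optimum
grid = np.stack(np.meshgrid(np.linspace(0, 1, 101),
                            np.linspace(0, 1, 101)), -1).reshape(-1, 2)
best = grid[np.argmin(emulator.predict(grid))]
```

The held-out score is what guards against the emulator merely memorizing the DoE; only after it passes would the predicted optimum be re-run in the full CFD solver for confirmation.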
Harnessing wind energy is a cornerstone of the global agenda toward sustainability, since it provides a renewable power source with minimal environmental impact. Advancements in wind turbine technology enable the establishment of wind farms, which can generate significantly more power than a single turbine.
Wind farm layout can influence overall energy output, operational efficiency, and total project costs. In a poorly laid out wind farm, wake effects generated by upwind turbines may decrease the performance of downwind turbines. In such scenarios, ML can help optimize wind farm layout by accurately predicting turbine interactions to ensure each turbine receives optimal wind flow.
For a wind farm of 25 NREL 5MW wind turbines with constant wind speed and neutral atmospheric conditions, CONVERGE’s ML tool optimized the layout of the center five turbines for maximum power production. A DoE study produced the data to train the ensemble ML model, which was used to predict the optimal layout. The ML model, which was fully trained in 1 minute, returned four optima, which were run in CONVERGE to confirm the configuration that produced the most power. Figure 2 shows the optimized wind farm layout, where the turbines in the center row are staggered.

Having concluded your research, you breathe a sigh of relief. CONVERGE’s ML tool has the potential to not only transform the engine industry, but also impart important insights in applications such as wind farm layout and reduced-order modeling. By training the model with DoE data, you have access to the entire design space and can uncover hidden patterns that were previously out of reach. With the speed and flexibility of CONVERGE’s ML tool, you no longer have to choose between quick results and accuracy—you can have both.
[1] Pei, Y., Pal, P., Zhang, Y., Traver, M., Cleary, D., Futterer, C., Brenner, M., Probst, D., and Som, S., “CFD-Guided Combustion System Optimization of a Gasoline Range Fuel in a Heavy-Duty Compression Ignition Engine Using Automatic Piston Geometry Generation and a Supercomputer,” SAE Technical Paper 2019-01-0001, 2019, doi:10.4271/2019-01-0001.
[2] Moiz, A.A., Pal, P., Probst, D., Pei, Y., Zhang, Y., Som, S., and Kodavasal, J., “A Machine Learning-Genetic Algorithm (ML-GA) Approach for Rapid Optimization Using High-Performance Computing,” SAE Technical Paper 2018-01-0190, 2018, doi:10.4271/2018-01-0190.
Graphcore has used a range of technologies from Mentor, a Siemens business, to successfully design and verify its latest M2000 platform based on the Graphcore Colossus™ GC200 Intelligence Processing Unit (IPU) processor.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD helps users create thermal models of electronics packages easily and quickly. Watch this short video to learn how.
With this release, Simcenter FLOEFD allows users to add a component into a direct current (DC) electro-thermal calculation by specifying the component’s electrical resistance. The corresponding Joule heat is calculated and applied to the body as a heat source. Watch this short video to learn how.
With this release, the software features a new battery model extraction capability that can be used to extract the Equivalent Circuit Model (ECM) input parameters from experimental data. This enables you to get to the required input parameters faster and easier. Watch this short video to learn how.
With this release, Simcenter FLOEFD allows users to create a compact Reduced Order Model (ROM) that solves at a faster rate, while still maintaining a high level of accuracy. Watch this short video to learn how.
High semiconductor temperatures may lead to component degradation and, ultimately, failure. Proper semiconductor thermal management is key for design safety, reliability, and mission-critical applications.
A common question from Tecplot 360 users centers on which hardware to buy for the best performance. The answer is, invariably, it depends. That said, we'll try to demystify how Tecplot 360 uses your hardware so you can make an informed decision in your hardware purchase.
Let's look at each of the major hardware components on your machine and review some test results that illustrate the benefits of improved hardware.
Our test data is an OVERFLOW simulation of a wind turbine. The data consists of 5,863 zones totaling 263,075,016 elements, and the file size is 20.9 GB.
The test was performed using 1, 2, 4, 8, 16, and 32 CPU-cores, with the data on a local HDD (spinning hard drive) and a local SSD (solid-state drive). Limiting the number of CPU-cores was done using Tecplot 360's --max-available-processors command line option.
Data was cleared from the disk cache between runs using RamMap.
Advice: Buy the fastest disk you can afford.
To generate any plot in Tecplot 360, you need to load data from disk. Some plots require more data to be loaded than others. Some file formats are also more efficient than others, particularly formats that summarize the contents of the file in a single header portion at the top or bottom of the file. Tecplot's SZPLT is a good example of a highly efficient file format.
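The benefit of a self-describing format can be sketched with a toy indexed file layout (purely hypothetical, not SZPLT's actual structure): a header records where each zone lives, so a reader can seek straight to the zone it needs instead of scanning the whole file.

```python
import struct

# Toy indexed format: [zone count][offset table][zone data...].
# This is an illustration of the idea only, not any real Tecplot format.

def write_indexed(path, zones):
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(zones)))
        index_pos = f.tell()
        f.write(b"\x00" * 8 * len(zones))   # placeholder for the offset table
        offsets = []
        for z in zones:
            offsets.append(f.tell())
            f.write(struct.pack("<I", len(z)))  # zone length, then payload
            f.write(z)
        f.seek(index_pos)                    # go back and fill in the table
        for off in offsets:
            f.write(struct.pack("<Q", off))

def read_zone(path, i):
    """Read one zone without touching the rest of the file."""
    with open(path, "rb") as f:
        f.seek(4 + 8 * i)                    # jump into the offset table
        (off,) = struct.unpack("<Q", f.read(8))
        f.seek(off)                          # seek directly to the zone
        (n,) = struct.unpack("<I", f.read(4))
        return f.read(n)
```

A format without such a header forces the loader to walk the entire file to find zone boundaries, which is exactly the cost a summarizing header avoids.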
We found that the SSD was 61% faster than the HDD when using all 32 CPU-cores for this post-processing task.
All this said, if your data are on a remote server (network drive, cloud storage, HPC, etc.), you'll want to ensure you have a fast disk on the remote resource and a fast network connection.
With Tecplot 360 the SZPLT file format coupled with the SZL Server could help here. With FieldView you could run in client-server mode.

Advice: Buy the fastest CPU, with the most cores, that you can afford. But realize that performance is not always linear with the number of cores.
Most of Tecplot 360’s data compute algorithms are multi-threaded – meaning they’ll use all available CPU-cores during the computation. These include (but are not limited to): Calculation of new variables, slices, iso-surfaces, streamtraces, and interpolations. The performance of these algorithms improves linearly with the number of CPU-cores available.
You'll also notice that the overall performance improvement is not linear with the number of CPU-cores. This is because loading data off disk becomes the dominant operation, and the speedup curve asymptotically approaches the limit set by the disk read speed.

You might notice that the HDD performance actually got worse beyond 8 CPU-cores. We believe this is because the HDD on this machine was just too slow to keep up with 16 and 32 concurrent threads requesting data.
It’s important to note that with data on the SSD the performance improved all the way to 32 CPU-cores. Further reinforcing the earlier advice – buy the fastest disk you can afford.
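The flattening described above is classic Amdahl's-law behavior: if some fraction of the wall time (here, disk reads) cannot be parallelized, the speedup is capped no matter how many cores you add. A minimal sketch:

```python
def amdahl_speedup(cores, serial_fraction):
    """Theoretical speedup when a fraction of the work (e.g. disk I/O)
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# If, say, 20% of the wall time is serial disk reading, 32 cores deliver
# nowhere near 32x; speedups flatten toward 1/serial_fraction = 5x.
```

The 20% serial fraction is an illustrative assumption, not a measured figure for the wind-turbine test case; the faster the disk, the smaller that fraction and the further the curve scales before flattening.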
Advice: Buy as much RAM as you need, but no more.
You might be thinking: “Thanks for nothing – really, how much RAM do I need?”
Well, that’s something you’re going to have to figure out for yourself. The more data Tecplot 360 needs to load to create your plot, the more RAM you’re going to need. Computed iso-surfaces can also be a large consumer of RAM – such as the iso-surface computed in this test case.
If you have transient data, you may want enough RAM to post-process a couple time steps simultaneously – as Tecplot 360 may start loading a new timestep before unloading data from an earlier timestep.
The amount of RAM required will differ depending on your file format, cell types, and the post-processing activities you're doing.
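As a very rough back-of-envelope (a sketch only; actual usage depends heavily on format, cell types, and derived data such as computed iso-surfaces), you can bound the RAM needed just to hold the field data:

```python
def estimate_ram_gb(num_points, num_variables, bytes_per_value=8,
                    concurrent_steps=1):
    """Rough lower bound on RAM (in GiB) to hold a dataset in memory.
    bytes_per_value=8 assumes double precision; concurrent_steps > 1
    models transient data where two time steps may be loaded at once."""
    total = num_points * num_variables * bytes_per_value * concurrent_steps
    return total / 1024**3

# The 263M-element test case above, with (say) six double-precision
# variables, needs on the order of 12 GB per time step before any
# derived data is computed.
```

The six-variable count is an assumption for illustration; substitute your own variable count, precision, and number of simultaneously loaded time steps.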
When testing the amount of RAM used by Tecplot 360, make sure to set the Load On Demand strategy to Minimize Memory Use (available under Options>Performance).

This will give you an understanding of the minimum amount of RAM required to accomplish your task. When set to Auto Unload (the default), Tecplot 360 will maintain more data in RAM, which improves performance. The amount of data Tecplot 360 holds in RAM is dictated by the Memory threshold (%) field, seen in the image above. So you – the user – have control over how much RAM Tecplot 360 is allowed to consume.
Advice: Most modern graphics cards are adequate; even Intel integrated graphics provide reasonable performance. Just make sure you have up-to-date graphics drivers. If you have an Nvidia graphics card, favor the "Studio" drivers over the "Game Ready" drivers. The "Studio" drivers are typically more stable and offer better performance for the types of plots produced by Tecplot 360.
Many people ask specifically what type of graphics card they should purchase. This is, interestingly, the least important hardware component (at least for most of the plots our users make). Most of the post-processing pipeline is dominated by the disk and CPU, so the time spent rendering the scene is a small percentage of the total.
That said, some scenes will stress your graphics card more than others.
Note that Tecplot 360’s interactive graphics performance currently (2023) suffers on Apple Silicon (M1 & M2 chips). The Tecplot development team is actively investigating solutions.
As with most things in life, striking a balance is important. You can spend a huge amount of money on CPUs and RAM, but if you have a slow disk or slow network connection, you’re going to be limited in how fast your post-processor can load the data into memory.
So, evaluate your post-processing activities to try to understand which pieces of hardware may be your bottleneck.
For example, if your workflow spends most of its time loading large files, prioritize a fast disk; if it spends most of its time computing derived quantities such as slices and iso-surfaces, prioritize CPU-cores.
And again – make sure you have enough RAM for your workflow.
The post What Computer Hardware Should I Buy for Tecplot 360? appeared first on Tecplot Website.
Three years after our merger began, we can report that the combined FieldView and Tecplot team is stronger than ever. Customers continue to receive the highest quality support and new product releases and we have built a solid foundation that will allow us to continue contributing to our customers’ successes long into the future.
This month we have taken another step by merging the FieldView website into www.tecplot.com. Our social media outreach will also be combined. Stay up to date with news and announcements by subscribing and following us on social media.

Members of Tecplot 360 & FieldView teams exhibit together at AIAA SciTech 2023. From left to right: Shane Wagner, Charles Schnake, Scott Imlay, Raja Olimuthu, Jared McGarry and Yves-Marie Lefebvre. Not shown are Scott Fowler and Brandon Markham.
It’s been a pleasure seeing two groups that were once competitors come together as a team, learn from each other and really enjoy working together.
– Yves-Marie Lefebvre, Tecplot CTO & FieldView Product Manager.
Our customers have seen some of the benefits of our merger in the form of streamlined services from the common Customer Portal, simplified licensing, and license renewals. Sharing expertise and assets across teams has already led to the faster implementation of modules such as licensing and CFD data loaders. By sharing our development resources, we’ve been able to invest more in new technology, which will soon translate to increased performance and new features for all products.
Many of the improvements are internal to our organization but will have lasting benefits for our customers. Using common development tools and infrastructure will enable us to be as efficient as possible to ensure we can put more of our energy into improving the products. And with the backing of the larger organization, we have a firm foundation to look long term at what our customers will need in years to come.
We want to thank our customers and partners for their support and continued investment as we endeavor to create better tools that empower engineers and scientists to discover, analyze and understand information in complex data, and effectively communicate their results.
The post FieldView joins Tecplot.com – Merger Update appeared first on Tecplot Website.
One of the most memorable parts of my finite-elements class in graduate school was a comparison of linear elements and higher-order elements for the structural analysis of a dam. As I remember, they were able to duplicate the results obtained with 34 linear elements by using a SINGLE high-order element. This made a big impression on me, but the skills I learned at that time remained largely unused until recently.
You see, my Ph.D. research and later work used finite-volume CFD codes to solve steady-state viscous flows. For steady flows, there didn't seem to be much advantage to using higher than 2nd- or 3rd-order accuracy.
This has changed recently as the analysis of unsteady vortical flows has become more common. The use of higher-order (greater than second-order) computational fluid dynamics (CFD) methods is increasing. Popular government and academic CFD codes such as FUN3D, KESTREL, and SU2 have released, or are planning to release, versions that include higher-order methods. This is because higher-order accurate methods offer the potential for better accuracy and stability, especially for unsteady flows. This trend is likely to continue.
Commercial visual analysis codes do not yet provide full support for higher-order solutions. The CFD Vision 2030 study states:
“…higher-order methods will likely increase in utilization during this time frame, although currently the ability to visualize results from higher order simulations is highly inadequate. Thus, software and hardware methods to handle data input/output (I/O), memory, and storage for these simulations (including higher-order methods) on emerging HPC systems must improve. Likewise, effective CFD visualization software algorithms and innovative information presentation (e.g., virtual reality) are also lacking.”
The isosurface algorithm described in this paper is the first step toward improving higher-order element visualization in the commercial visualization code Tecplot 360.
Higher-order methods can be based on either finite-difference methods or finite-element methods. While some popular codes use higher-order finite-difference methods (OVERFLOW, for example), this paper will focus on higher-order finite-element techniques. Specifically, we will present a memory-efficient recursive subdivision algorithm for visualizing the isosurface of higher-order element solutions.
In previous papers we demonstrated this technique for quadratic tetrahedral, hexahedral, pyramid, and prism elements with Lagrangian polynomial basis functions. In the paper Optimized Implementation of Recursive Sub-Division Technique for Higher-Order Finite-Element Isosurface and Streamline Visualization, we describe the integration of these techniques into the engine of the commercial visualization code Tecplot 360 and the speed optimizations involved. We also cover the extension of the recursive subdivision algorithm to cubic tetrahedral and pyramid elements and to quartic tetrahedral elements and, finally, its extension to the computation of streamlines.
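The idea behind recursive subdivision can be sketched in one dimension (a simplified illustration of the approach, not Tecplot's implementation): split a quadratic edge recursively until the field is nearly linear on each piece, then locate the iso-crossing on that piece by linear interpolation.

```python
def quad_eval(c0, c1, c2, t):
    """Quadratic Lagrange interpolation along an edge, with nodal
    values c0, c1, c2 at parameter t = 0, 0.5, 1."""
    return (c0 * (1 - t) * (1 - 2 * t)
            + c1 * 4 * t * (1 - t)
            + c2 * t * (2 * t - 1))

def iso_crossings(c0, c1, c2, iso, t0=0.0, t1=1.0, tol=1e-6):
    """Recursively subdivide [t0, t1] until the quadratic field is nearly
    linear there, then find the iso-crossing by linear interpolation."""
    f0, f1 = quad_eval(c0, c1, c2, t0), quad_eval(c0, c1, c2, t1)
    mid = 0.5 * (t0 + t1)
    fm = quad_eval(c0, c1, c2, mid)
    # Midpoint deviation from the chord measures the remaining curvature.
    if abs(fm - 0.5 * (f0 + f1)) < tol:
        if (f0 - iso) * (f1 - iso) <= 0 and f0 != f1:
            return [t0 + (iso - f0) / (f1 - f0) * (t1 - t0)]
        return []
    return (iso_crossings(c0, c1, c2, iso, t0, mid, tol)
            + iso_crossings(c0, c1, c2, iso, mid, t1, tol))
```

The same termination test (deviation of the higher-order field from its linear approximation) generalizes to faces and volumes, which is what makes the approach memory-efficient: elements are only subdivided where curvature demands it.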
The post Faster Visualization of Higher-Order Finite-Element Data appeared first on Tecplot Website.
In this release, we are very excited to offer "Batch-Pack" licensing for the first time. A Batch-Pack license enables a single user to run multiple concurrent batch instances of our Python API (PyTecplot) while consuming only a single license seat. This option will reduce license contention and allow for faster turnaround times by running jobs in parallel across multiple nodes of an HPC. All at a substantially lower cost than buying additional license seats.

Data courtesy of ZJ Wang, University of Kansas, visualization by Tecplot.
The post Webinar: Tecplot 360 2022 R2 appeared first on Tecplot Website.
Batch-mode is a term nearly as old as computers themselves. Despite its age, however, it is representative of a concept that is as relevant today as it ever was, perhaps even more so: headless (scripted, programmatic, automated, etc.) execution of instructions. Lots of engineering is done interactively, of course, but oftentimes the task is a known quantity and there is a ton of efficiency to be gained by automating the computational elements. That efficiency is realized ten times over when batch-mode meets parallelization – and that's why we thought it was high time we offered a batch-mode licensing model for Tecplot 360's Python API, PyTecplot. We call them "batch-packs."
Tecplot 360 batch-packs work by enabling users to run multiple concurrent instances of our Python API (PyTecplot) while consuming only a single license seat. It’s an optional upgrade that any customer can add to their license for a fee. The benefit? The fee for a batch-pack is substantially lower than buying an equivalent number of license seats – which makes it easier to justify outfitting your engineers with the software access they need to reach peak efficiency.
Here is a handy little diagram we drew to help explain it better:

Each network license allows ‘n’ seats. Traditionally, each instance of PyTecplot consumes 1 seat. Prior to the 2022 R2 release of Tecplot 360 EX, licenses only operated using the paradigm illustrated in the first two rows of the diagram above (that is, a user could check out up to ‘n’ seats, or ‘n’ users could check out a single seat). Now customers can elect to purchase batch-packs, which will enable each seat to provide a single user with access to ‘m’ instances of PyTecplot, as shown in the bottom row of the figure.
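A driving script for such parallel batch runs might look like the following sketch. The worker body is a placeholder, and names like run_case are illustrative rather than part of the PyTecplot API; in real use each worker would launch a separate Python process (e.g. via subprocess) that imports tecplot, loads the case, and exports results, with each such process counting as one batch instance.

```python
from concurrent.futures import ThreadPoolExecutor

def run_case(case_file):
    """Placeholder for one PyTecplot batch job. A real worker would spawn
    a separate process that imports tecplot and post-processes case_file;
    here we just return a marker string."""
    return f"processed {case_file}"

def run_batch(case_files, max_instances=4):
    # Cap concurrency at the 'm' instances the batch-pack allows;
    # additional cases queue until an instance frees up.
    with ThreadPoolExecutor(max_workers=max_instances) as pool:
        return list(pool.map(run_case, case_files))
```

Throttling the pool to the licensed instance count keeps jobs from failing at license checkout while still filling every instance you have paid for.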
In addition to a cost reduction (vs. purchasing an equivalent number of network seats), batch-pack licensees will enjoy a number of other benefits.
We’re excited to offer this new option and hope that our customers can make the most of it.
The post Introducing 360 “Batch-Packs” appeared first on Tecplot Website.
If you care about how you present your data and how people perceive your results, stop reading and watch this talk by Kristen Thyng on YouTube. Seriously, I’ll wait, I’ve got the time.
Which colormap you choose, and which data values are assigned to each color can be vitally important to how you (or your clients) interpret the data being presented. To illustrate the importance of this, consider the image below.

Figure 1. Visualization of the Southeast United States. [4]
Before I explain what a perceptually uniform colormap is, let’s start with everyone’s favorite: the rainbow colormap. We all love the rainbow colormap because it’s pretty and is recognizable. Everyone knows “ROY G BIV” so we think of this color progression as intuitive, but in reality (for scalar values) it’s anything but.
Consider the image below, which represents the “Estimated fraction of precipitation lost to evapotranspiration”. This image makes it appear that there’s a very distinct difference in the scalar value right down the center of the United States. Is there really a sudden change in the values right in the middle of the Great Plains? No – this is an artifact of the colormap, which is misleading you!

Figure 2. This plot illustrates how the rainbow colormap is misleading, giving the perception that there is a distinct difference in the middle of the US when, in fact, the values are more continuous. [2]
So let’s dive a little deeper into the rainbow colormap and how it compares to perceptually uniform (or perceptually linear) colormaps.
Consider the six images below: what are we looking at? If you looked only at the top three images, you might get the impression that the scalar value changes non-linearly, while this value (radius) is actually changing linearly. If presented with the rainbow colormap, you'd be forgiven for not guessing that the object is a cone, colored by radius.

Figure 3. An example of how the rainbow colormap imparts information that does not actually exist in the data.
So why does the rainbow colormap mislead? It’s because the color values are not perceptually uniform. In this image you can see how the perceptual changes in the colormap vary from one end to the other. The gray scale and “cmocean – haline” colormaps shown here are perceptually uniform, while the rainbow colormap adds information that doesn’t actually exist.

Figure 4. Visualization of the perceptual changes of three colormaps. [5]
Well, that depends. Tecplot 360 and FieldView are typically used to represent scalar data, so Sequential and Diverging colormaps will probably get used the most – but there are others we will discuss as well.
Sequential colormaps are ideal for scalar values in which there’s a continuous range of values. Think pressure, temperature, and velocity magnitude. Here we’re using the ‘cmocean – thermal’ colormap in Tecplot 360 to represent fluid temperature in a Barracuda Virtual Reactor simulation of a cyclone separator.

Diverging colormaps are a great option when you want to highlight a change in values. Think ratios, where the values span from -1 to 1, it can help to highlight the value at zero.

The diverging colormap is also useful for “delta plots” – In the plot below, the bottom frame is showing a delta between the current time step and the time average. Using a diverging colormap, it’s easy to identify where the delta changes from negative to positive.

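One practical detail when building such a delta plot: choose contour limits that are symmetric about zero so the colormap's neutral center color lands exactly on a delta of zero. A small sketch (an illustrative helper, not a Tecplot 360 API call):

```python
def symmetric_limits(values):
    """Contour limits for a diverging colormap on a delta field: make the
    range symmetric about zero so the neutral center color sits at 0."""
    m = max(abs(min(values)), abs(max(values)))
    return -m, m
```

Without this step, a delta field spanning, say, -0.3 to 1.2 would place the neutral color at 0.45 instead of 0, and the sign change would no longer coincide with the color transition.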
If you have discrete data that represent things like material properties – say "rock, sand, water, oil" – these data can be represented using integer values and a qualitative colormap. This type of colormap will do a good job of supplying distinct colors for each value. An example of this, from a CONVERGE simulation, can be seen below. Instructions to create this plot can be found in our blog, Creating a Materials Legend in Tecplot 360.

Perhaps infrequently used, but still important to point out, is the "phase" colormap. This is particularly useful for values which are cyclic, such as a theta value used to represent wind direction in this FVCOM simulation result. If we were to use a simple sequential colormap (inset plot below), you would observe what appears to be a large gradient where the wind direction is 360° vs. 0°. Logically these are the same value, and using the "cmocean – phase" colormap allows you to communicate the continuous nature of the data.

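The wrap-around can be made concrete with a small sketch (illustrative only, not Tecplot's API): a phase colormap effectively maps the angle modulo 360° onto the color axis, so 0° and 360° land on the same color, while a sequential colormap places them at opposite ends.

```python
def cyclic_color_position(theta_deg):
    """Map an angle to a colormap coordinate in [0, 1) so that 0 and 360
    degrees receive the same color, as a phase colormap does."""
    return (theta_deg % 360.0) / 360.0

def angular_distance(a_deg, b_deg):
    """True angular separation between two directions, in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

# On a sequential colormap, 359 deg and 1 deg sit at positions 0.997 and
# 0.003, a huge apparent jump, even though the directions differ by only
# 2 degrees; the cyclic wrap removes that artificial discontinuity.
```

This is exactly the artifact visible in the inset plot: the data are continuous, but the non-cyclic color axis manufactures a gradient at the 0°/360° seam.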
There are times when you want to force a break in a continuous colormap. In the image below, the colormap is continuous from green to white
but we want to ensure that values at or below zero are represented as blue – to indicate water. In Tecplot 360 this can be done using the “Override band colors” option, in which we override the first color band to be blue. This makes the plot more realistic and therefore easier to interpret.

The post Colormap in Tecplot 360 appeared first on Tecplot Website.

Ansys has announced that it will acquire Zemax, maker of high-performance optical imaging system simulation solutions. The terms of the deal were not announced, but it is expected to close in the fourth quarter of 2021.
Zemax’s OpticStudio is often mentioned when users talk about designing optical, lighting, or laser systems. Ansys says that the addition of Zemax will enable Ansys to offer a “comprehensive solution for simulating the behavior of light in complex, innovative products … from the microscale with the Ansys Lumerical photonics products, to the imaging of the physical world with Zemax, to human vision perception with Ansys Speos [acquired with Optis]”.
This feels a lot like what we’re seeing in other forms of CAE, for example, when we simulate materials from nano-scale all the way to fully-produced-sheet-of-plastic-scale. There is something to be learned at each point, and simulating them all leads, ultimately, to a more fit-for-purpose end result.
Ansys is acquiring Zemax from its current owner, EQT Private Equity. EQT’s announcement of the sale says that “[w]ith the support of EQT, Zemax expanded its management team and focused on broadening the Company’s product portfolio through substantial R&D investment focused on the fastest growing segments in the optics space. Zemax also revamped its go-to-market sales approach and successfully transitioned the business model toward recurring subscription revenue”. EQT had acquired Zemax in 2018 from Arlington Capital Partners, a private equity firm, which had acquired Zemax in 2015. Why does this matter? Because the path each company takes is different — and it’s sometimes not a straight line.
Ansys says the transaction is not expected to have a material impact on its 2021 financial results.

Last year Sandvik acquired CGTech, makers of Vericut. I, like many people, thought “well, that’s interesting” and moved on. Then in July, Sandvik announced it was snapping up the holding company for Cimatron, GibbsCAM (both acquired by Battery Ventures from 3D Systems), and SigmaTEK (acquired by Battery Ventures in 2018). Then, last week, Sandvik said it was adding Mastercam to that list … It’s clearly time to dig a little deeper into Sandvik and why it’s doing this.
First, a little background on Sandvik. Sandvik operates in three main spheres: rocks, machining, and materials. For the rocks part of the business, the company makes mining/rock extraction and rock processing (crushing, screening, and the like) solutions. Very cool stuff but not relevant to the CAM discussion.
The materials part of the business develops and sells industrial materials; Sandvik is in the process of spinning out this business. Also interesting but …
The machining part of the business is where things get more relevant to us. Sandvik Machining & Manufacturing Solutions (SMM) has been supplying cutting tools and inserts for many years, via brands like Sandvik, SECO, Miranda, Walter, and Dormer Pramet, and sees a lot of opportunity in streamlining the processes around the use of specific tools and machines. Lightweighting and sustainability efforts in end industries are driving interest in new materials and more complex components, as well as tighter integration between design and manufacturing operations. That digitalization across an enterprise's areas of business, Sandvik thinks, plays into its strengths.
According to info from the company’s 2020 Capital Markets Day, rocks and materials are steady but slow revenue growers. The company had set a modest 5% revenue growth target but had consistently been delivering closer to 3% — what to do? Like many others, the focus shifted to (1) software and (2) growth by acquisition. Buying CAM companies ticked both of those boxes, bringing repeatable, profitable growth. In an area the company already had some experience in.
Back to digitalization. If we think of a manufacturer as having (in-house or with partners) a design function, which sends the concept on to production preparation, then to machining, and, finally, to verification/quality control, Sandvik wants to expand outwards from machining to that entire world. Sandvik wants to help customers optimize the selection of tools, the machining strategy, and the verification and quality workflow.
The Manufacturing Solutions subdivision within SMM was created last year to go after this opportunity. It’s got 3 areas of focus: automating the manufacturing process, industrializing additive manufacturing, and expanding the use of metrology to real-time decision making.
The CGTech acquisition last year was the first step in realizing this vision. Vericut is prized for its ability to work with any CAM, machine tool, and cutting tool for NC code simulation, verification, optimization, and programming. CGTech is a long-time supplier of Vericut software to Sandvik’s Coromant production units, so the companies knew one another well. Vericut helps Sandvik close that digitalization/optimization loop — and, of course, gives it access to the many CAM users out there who do not use Coromant.
But verification is only one part of the overall loop, and in some senses, the last. CAM, on the other hand, is the first (after design). Sandvik saw CAM as "the most important market to enter due to attractive growth rates – and its proximity to Sandvik Manufacturing and Machining Solutions' core business." Adding Cimatron, GibbsCAM, SigmaTEK, and Mastercam gets Sandvik that much closer to offering clients a set of solutions to digitize their complete workflows.
And it makes business sense to add CAM to the bigger offering.
To head off one question: As of last week’s public statements, anyway, Sandvik has no interest in getting into CAD, preferring to leave that battlefield to others, and continue on its path of openness and neutrality.
And because some of you asked: there is some overlap in these acquisitions, but remarkably little, considering how established these companies all are. GibbsCAM is mostly used for production milling and turning; Cimatron is used in mold and die — and with a big presence in automotive, where Sandvik already has a significant interest; and SigmaNEST is for sheet metal fabrication and material requisitioning.
One interesting (to me, anyway) observation: 3D Systems sold Gibbs and Cimatron to Battery in November 2020. Why didn’t Sandvik snap it up then? Why wait until July 2021? A few possible reasons: Sandvik CEO Stefan Widing has been upfront about his company’s relative lack of efficiency in finding/closing/incorporating acquisitions; perhaps it was simply not ready to do a deal of this type and size eight months earlier. Another possible reason: One presumes 3D Systems “cleaned up” Cimatron and GibbsCAM before the sale (meaning, separating business systems and financials from the parent, figuring out HR, etc.) but perhaps there was more to be done, and Sandvik didn’t want to take that on. And, finally, maybe the real prize here for Sandvik was SigmaNEST, which Battery Ventures had acquired in 2018, and Cimatron and GibbsCAM simply became part of the deal. We may never know.
This whole thing is fascinating. A company out of left field, acquiring these premium PLMish assets. Spending major cash (although we don’t know how much because of non-disclosures between buyer and sellers) for a major market presence.
No one has ever asked me about a CAM roll-up, yet I’m constantly asked about how an acquirer could create another Ansys. Perhaps that was the wrong question, and it should have been about CAM all along. It’s possible that the window for another company to duplicate what Sandvik is doing may be closing since there are few assets left to acquire.
Sandvik's CAM acquisitions haven't closed yet, but assuming they do, there's a strong fit between CAM and Sandvik's other manufacturing-focused business areas. It's more software, with its happy margins. And, finally, it lets Sandvik address the entire workflow from just after component design to machining and on to verification. Mr. Widing says that Sandvik first innovated in hardware, then in service, and now in software to optimize the component part manufacturing process. These are where gains will come, he says, in maximizing productivity and tool longevity. Further out, he foresees measuring every part to see how the process can be further optimized. It's a sound investment in the evolution of both Sandvik and manufacturing.
We all love a good reinvention story, and how Sandvik executes on this vision will, of course, determine if the reinvention was successful. And, of course, there’s always the potential for more news of this sort …

I missed this last month — Sandvik also acquired Cambrio, which is the combined brand for what we might know better as GibbsCAM (milling, turning), Cimatron (mold and die), and SigmaNEST (nesting, obvs). These three were spun out of 3D Systems last year, acquired by Battery Ventures — and now sold on to Sandvik.
This was announced in July, and the acquisition is expected to close in the second half of 2021 — we’ll find out on Friday if it already has.
At that time, Sandvik said its strategic aim is to "provide customers with software solutions enabling automation of the full component manufacturing value chain – from design and planning to preparation, production and verification … By acquiring Cambrio, Sandvik will establish an important position in the CAM market that includes both toolmaking and general-purpose machining. This will complement the existing customer offering in Sandvik Manufacturing Solutions".
Cambrio has around 375 employees and in 2020, had revenue of about $68 million.
If we do a bit of math, Cambrio’s $68 million + CNC Software’s $60 million + CGTech’s (that’s Vericut’s maker) of $54 million add up to $182 million in acquired CAM revenue. Not bad.
More on Friday.

CNC Software and its Mastercam have been a mainstay among CAM providers for decades, marketing its solutions as independent, focused on the workgroup and individual. That is about to change: Sandvik, which bought CGTech late last year, has announced that it will acquire CNC Software to build out its CAM offerings.
According to Sandvik’s announcement, CNC Software brings a “world-class CAM brand in the Mastercam software suite with an installed base of around 270,000 licenses/users, the largest in the industry, as well as a strong market reseller network and well-established partnerships with leading machine makers and tooling companies”.
We were taken by surprise by the CGTech deal — but shouldn’t be by the Mastercam acquisition. Stefan Widing, Sandvik’s CEO explains it this way: “[Acquiring Mastercam] is in line with our strategic focus to grow in the digital manufacturing space, with special attention on industrial software close to component manufacturing. The acquisition of CNC Software and the Mastercam portfolio, in combination with our existing offerings and extensive manufacturing capabilities, will make Sandvik a leader in the overall CAM market, measured in installed base. CAM plays a vital role in the digital manufacturing process, enabling new and innovative solutions in automated design for manufacturing.” The announcement goes on to say, “CNC Software has a strong market position in CAM, and particularly for small and medium-sized manufacturing enterprises (SME’s), something that will support Sandvik’s strategic ambitions to develop solutions to automate the manufacturing value chain for SME’s – and deliver competitive point solutions for large original equipment manufacturers (OEM’s).”
Sandvik says that CNC Software has 220 employees, with revenue of $60 million in 2020, and a “historical annual growth rate of approximately 10 percent and is expected to outperform the estimated market growth of 7 percent”.
No purchase price was disclosed, but the deal is expected to close during the fourth quarter.
Sandvik is holding a call about this on Friday — more updates then, if warranted.

Bentley continues to grow its deep expertise in various AEC disciplines — most recently, expanding its focus in underground resource mapping and analysis. This diversity serves it well; read on.
Unlike AspenTech, Bentley's revenue growth is speeding up (total revenue up 21% in Q2, including a wee bit from Seequent, and up 17% for the first six months of 2021). Why the difference? IMHO, because Bentley has a much broader base, selling into many more end industries as well as to road/bridge/water/wastewater infrastructure projects that keep going, Covid or not. CEO Greg Bentley told investors that some parts of the business are back to, or even better than, pre-pandemic levels, but not yet all. He said that the company continues to struggle in industrial and resources capital expenditure projects, and therefore in the geographies (the Middle East and Southeast Asia) that are the most dependent on this sector. This is balanced against continued success in new accounts and the company's reinvigorated selling to small and medium enterprises via its Virtuosity subsidiary, and a resurgence in the overall commercial/facilities sector. In general, it appears that sales to contractors such as architects and engineers lag behind those to owners and operators of commercial facilities, which makes sense as many new projects are still on pause until pandemic-related effects settle down.
One unusual comment from Bentley’s earnings call that we’re going to listen for on others: The government of China is asking companies to explain why they are not using locally-grown software solutions; it appears to be offering preferential tax treatment for buyers of local software. As Greg Bentley told investors, “[d]uring the year to date, we have experienced a rash of unanticipated subscription cancellations within the mid-sized accounts in China that have for years subscribed to our China-specific enterprise program … Because we don’t think there are product issues, we will try to reinstate these accounts through E365 programs, where we can maintain continuous visibility as to their usage and engagement”. So, to recap: the government is using taxation to prefer one set of vendors over another, and all Bentley can do (really) is try to bring these accounts back and then monitor them constantly to keep on top of emerging issues. FWIW, in the pre-pandemic filings for Bentley’s IPO, “greater China, which we define as the Peoples’ Republic of China, Hong Kong and Taiwan … has become one of our largest (among our top five) and fastest-growing regions as measured by revenue, contributing just over 5% of our 2019 revenues”. Something to watch.
The company updated its financial outlook for 2021 to include the recent Seequent acquisition and this moderate level of economic uncertainty. Bentley might actually join the billion-dollar club on a pro forma basis — as if the acquisition of Seequent had occurred at the beginning of 2021. On a reported basis, the company sees total revenue between $945 million and $960 million, or an increase of around 18%, including Seequent. Excluding Seequent, Bentley sees organic revenue growth of 10% to 11%.
Much more here, on Bentley’s investor website.

We still have to hear from Autodesk, but there’s been a lot of AECish earnings news over the last few weeks. This post starts a modest series as we try to catch up on those results.
AspenTech reported results for its fiscal fourth quarter of 2021 last week. Total revenue was $198 million in FQ4, down 2% from a year ago. License revenue was $145 million, down 3%; maintenance revenue was $46 million, basically flat when compared to a year earlier; and services and other revenue was $7 million, up 9%.
For the year, total revenue was up 19% to $709 million, license revenue was up 28%, maintenance was up 4% and services and other revenue was down 18%.
Looking ahead, CEO Antonio Pietri said that he is "optimistic about the long-term opportunity for AspenTech. The need for our customers to operate their assets safely, sustainably, reliably and profitably has never been greater … We are confident in our ability to return to double-digit annual spend growth over time as economic conditions and industry budgets normalize." The company sees fiscal 2022 total revenue of $702 million to $737 million, which is up just $10 million from fiscal 2021 at the midpoint.
Why the slowdown in FQ4 from earlier in the year? And why the modest guidance for fiscal 2022? One word: Covid. And the uncertainty it creates among AspenTech’s customers when it comes to spending precious cash. AspenTech expects its visibility to improve when new budgets are set in the calendar fourth quarter. By then, AspenTech hopes, its customers will have a clearer view of reopening, consumer spending, and the timing of an eventual recovery.
Lots more detail here on AspenTech’s investor website.
Next up, Bentley. Yup. Alphabetical order.
There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently CFD simulation was focused on existing and future things, think flying cars. Now we see CFD being applied to simulate fluid flow in the distant past, think fossils.
CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation
Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid we can look forward to a bright future.
Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature
It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.
CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)
Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment researchers have enlisted the help of Computational Fluid Dynamics (CFD).
CFD Water Flow Simulation over a Parvancorina: Forward direction (illustration only, not part of the study)
One of nature's smallest aerodynamic specialists - insects - has provided a clue to more efficient and robust wind turbine design.
Dragonfly: Yellow-winged Darter (License: CC BY-SA 2.5, André Karwath)
The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.
2 Hour Marathon Attempt
In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:
First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:
/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
convertToMeters 1;
vertices
(
(-1 0 0) // 0
(0 0 0) // 1
(1 0 0) // 2
(2 0 0) // 3
(-1 2 0) // 4
(0 2 0) // 5
(1 2 0) // 6
(2 2 0) // 7
(-1 0 1) // 8
(0 0 1) // 9
(1 0 1) // 10
(2 0 1) // 11
(-1 2 1) // 12
(0 2 1) // 13
(1 2 1) // 14
(2 2 1) // 15
);
blocks
(
hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);
edges
(
);
boundary
(
inlet
{
type patch;
faces
(
(0 8 12 4)
);
}
outlet
{
type patch;
faces
(
(3 7 15 11)
);
}
lowerWall
{
type wall;
faces
(
(0 1 9 8)
(1 2 10 9)
(2 3 11 10)
);
}
upperWall
{
type patch;
faces
(
(4 12 13 5)
(5 13 14 6)
(6 14 15 7)
);
}
frontAndBack
{
type empty;
faces
(
(8 9 13 12)
(9 10 14 13)
(10 11 15 14)
(1 0 4 5)
(2 1 5 6)
(3 2 6 7)
);
}
);
// ************************************************************************* //
This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!
So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub-dictionary that lists each curved edge together with its interpolation points:
edges
(
polyLine 1 2
(
(0 0 0)
(0.1 0.0309016994 0)
(0.2 0.0587785252 0)
(0.3 0.0809016994 0)
(0.4 0.0951056516 0)
(0.5 0.1 0)
(0.6 0.0951056516 0)
(0.7 0.0809016994 0)
(0.8 0.0587785252 0)
(0.9 0.0309016994 0)
(1 0 0)
)
polyLine 9 10
(
(0 0 1)
(0.1 0.0309016994 1)
(0.2 0.0587785252 1)
(0.3 0.0809016994 1)
(0.4 0.0951056516 1)
(0.5 0.1 1)
(0.6 0.0951056516 1)
(0.7 0.0809016994 1)
(0.8 0.0587785252 1)
(0.9 0.0309016994 1)
(1 0 1)
)
);
The sub-dictionary above is just a list of points on the curve (here, samples of y = 0.1*sin(pi*x)). The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.
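For reference, the interpolation points above are consistent with sampling the bump curve y = 0.1*sin(pi*x) on [0, 1]; below is a short Python sketch (a hypothetical helper, not part of the original post) that generates such a point list for both polyLine edges:

```python
import math

def bump_points(n=10, amplitude=0.1, z=0.0):
    """Sample y = amplitude * sin(pi * x) at n+1 evenly spaced x in [0, 1]."""
    pts = []
    for i in range(n + 1):
        x = i / n
        pts.append((x, amplitude * math.sin(math.pi * x), z))
    return pts

# Print in blockMeshDict polyLine format for the z = 0 and z = 1 edges.
for z in (0.0, 1.0):
    for x, y, zz in bump_points(z=z):
        print(f"({x:g} {y:.10g} {zz:g})")
```

Changing `n` refines the polyLine; with `spline` interpolation far fewer points would be needed for a smooth bump.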

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!
Cheers.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.
Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.
In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.
Without going into detail about Schlieren and Shadowgraph themselves, you primarily need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field's refractive index (which is directly related to density).
In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.
For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike the Schlieren images, shadowgraph has no direction and shows you the Laplacian of the refractive index field (or density field).
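These definitions are easy to prototype outside of any visualization tool. Here is a minimal numpy sketch (using density directly as a stand-in for refractive index, on a hypothetical Gaussian density blob, all my own assumptions) that computes both fields on a uniform grid:

```python
import numpy as np

# Hypothetical density field on a uniform 2D grid: a Gaussian "blob".
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x, indexing="ij")
rho = 1.0 + 0.5 * np.exp(-(X**2 + Y**2) / 0.05)

dx = x[1] - x[0]
# "Schlieren": the first derivatives of the density field (directional).
drho_dx, drho_dy = np.gradient(rho, dx, dx)
# "Shadowgraph": the Laplacian, built as the divergence of the gradient.
shadowgraph = (np.gradient(drho_dx, dx, axis=0)
               + np.gradient(drho_dy, dx, axis=1))
```

Visualizing one of `drho_dx`/`drho_dy` in grayscale mimics a Schlieren image with a vertical/horizontal knife edge, and `shadowgraph` mimics the shadowgraph.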
In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.
Well, as you might expect from the introduction, we simply do this by visualizing the gradients of the density field.
In ParaView the necessary tool for this is:
Gradient of Unstructured DataSet:

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:

There are a few problems with the above image: (1) Schlieren images are directional, and this is a magnitude; (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.
To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:


The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity: ∇²ρ = ∇·(∇ρ).
Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!
To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, deselect “Compute Gradient”, select “Compute Divergence”, and change the divergence array name to Shadowgraph.
Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these mean: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.
This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.
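One way to do that rescaling programmatically is to clip the derivative magnitudes at a high quantile before mapping to grayscale. A minimal sketch (the helper name and 95% cutoff are my own choices, not from the original post):

```python
import numpy as np

def to_grayscale(field, cutoff=0.95):
    """Map a Schlieren/Shadowgraph field to [0, 1] grayscale, clipping the
    top (1 - cutoff) fraction of magnitudes so a strong shock does not
    wash out weaker features."""
    mag = np.abs(field)
    vmax = np.quantile(mag, cutoff)
    return np.clip(mag / vmax, 0.0, 1.0)
```

Tuning `cutoff` plays the same role as adjusting the color-map range in ParaView: it trades contrast in strong features for visibility of weak ones.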
Hopefully this post will be helpful to some of you out there. Cheers!
Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/
The law is given by:
mu = mu_ref * (T/T_ref)^(3/2) * (T_ref + S)/(T + S)
It is also often simplified (as it is in OpenFOAM) to:
mu = As * T^(3/2)/(T + Ts)
In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.
So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find; if you happen to find them, they can be hard to reference, and you may not know how accurate they are. Second, creating your own Sutherland coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and not only that, you can give an exact number for the error in the temperature range you are investigating.
So let’s say we are looking for a viscosity model of Nitrogen N2 – and we can’t find the coefficients anywhere – or for the second reason above, you’ve decided its best to create your own.
By far the simplest way to achieve this is using Python and the Scipy.optimize package.
Step 1: Get Data
The first step is to find some well known, and easily cited, source for viscosity data. I usually use the NIST WebBook (https://webbook.nist.gov/), but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:
| Temperature (K) | Viscosity (Pa.s) |
| 200 | 0.000012924 |
| 400 | 0.000022217 |
| 600 | 0.000029602 |
| 800 | 0.000035932 |
| 1000 | 0.000041597 |
| 1200 | 0.000046812 |
| 1400 | 0.000051704 |
| 1600 | 0.000056357 |
| 1800 | 0.000060829 |
| 2000 | 0.000065162 |
This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database at 0.101 MPa. (Note that in these ranges viscosity should be only temperature dependent).
Step 2: Use python to fit the data
If you are unfamiliar with Python, this may seem a little foreign to you, but python is extremely simple.
First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
Now we define the sutherland function:
def sutherland(T, As, Ts):
return As*T**(3/2)/(Ts+T)
Next we input the data:
T=[200,
400,
600,
800,
1000,
1200,
1400,
1600,
1800,
2000]
mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]
Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least-squares minimization to solve for the unknown coefficients. It returns two outputs: popt, an array containing our desired variables As and Ts, and pcov, the estimated covariance of those parameters.
popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
Now we can just output our data to the screen and plot the results if we so wish:
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')
xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)
plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
Overall the entire code looks like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def sutherland(T, As, Ts):
return As*T**(3/2)/(Ts+T)
T=[200, 400, 600,
800,
1000,
1200,
1400,
1600,
1800,
2000]
mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]
popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')
xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)
plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!

In this post, we looked at how we can take a database of viscosity-temperature data and use the Python package SciPy to solve for our unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, which was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.
This task could also easily be accomplished using the Matlab curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel Solver to solve for unknown coefficients.
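Since a big selling point above was being able to quantify the error, here is a quick check of the relative error of the fit against the NIST points, reusing the As and Ts values reported above:

```python
import numpy as np

def sutherland(T, As, Ts):
    """Simplified Sutherland law: mu = As * T^(3/2) / (T + Ts)."""
    return As * T**1.5 / (Ts + T)

As, Ts = 1.55902e-6, 168.766  # coefficients fitted above for N2

T = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000], dtype=float)
mu = np.array([0.000012924, 0.000022217, 0.000029602, 0.000035932,
               0.000041597, 0.000046812, 0.000051704, 0.000056357,
               0.000060829, 0.000065162])

# Relative error of the fitted model at each NIST data point.
rel_err = np.abs(sutherland(T, As, Ts) - mu) / mu
print(f"max relative error over 200-2000 K: {rel_err.max():.2%}")
```

The largest deviation sits at the low-temperature end of the range, which is exactly the kind of number you can now cite in a thesis or paper.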
The most common complaint I hear, and the most common problem I observe with OpenFOAM is its supposed “steep learning curve”. I would argue however, that for those who want to practice CFD effectively, the learning curve is equally as steep as any other software.
There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.
While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop down menus, point and click options etc), it is equally as likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless and things like code modification, and bash and python scripting can make OpenFOAM worklows EXTREMELY efficient and powerful.
Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:
(1) Understand CFD
This may seem obvious… but it’s not to some. Troubleshooting bad simulation results or unstable simulations that crash is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:
(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab, by F. Moukalled, L. Mangani, and M. Darwish
(b) An Introduction to Computational Fluid Dynamics: The Finite Volume Method, by H. K. Versteeg and W. Malalasekera
(c) Computational Fluid Dynamics: The Basics with Applications, by John D. Anderson
(2) Understand fluid dynamics
Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.
(3) Avoid building cases from scratch
Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!
As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.
(4) Using Ubuntu makes things much easier
This is strictly my opinion, but I have found it to be true. Yes, it’s true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can’t really speak to how well they work – mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (I guess unless you’re a gamer – and even then, more and more games are now on Linux). Not only that, but the VAST majority of forums and troubleshooting associated with OpenFOAM that you’ll find on the internet are from Ubuntu users.
I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu – plus a windows VirtualBox, plus a laptop running windows that I use for traditional windows type stuff. Dual booting is another option, but seamlessly moving between the environments is easier.
(5) If you’re struggling, simplify
Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.
(6) Familiarize yourself with the cfd-online forum
If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful and overall the forum is an extremely positive environment for working out the kinks with your simulations.
(7) The results from checkMesh matter
If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:
http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf
(8) CFL Number Matters
If you are running a transient case, the Courant-Friedrichs-Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time-step is too large you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the timestep by a factor of 2. More often than not, this solves the problem.
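That factor-of-2 rule can be made slightly more quantitative using the convective Courant number Co = u·Δt/Δx. A tiny sketch (the 0.5 default is just an assumption – acceptable Co limits vary by solver and scheme):

```python
def max_timestep(u, dx, co_max=0.5):
    """Largest dt that keeps the convective Courant number
    Co = u * dt / dx at or below co_max."""
    return co_max * dx / u

# e.g. a 340 m/s disturbance crossing a 1 mm cell at Co = 0.5:
dt = max_timestep(u=340.0, dx=1e-3, co_max=0.5)
```

The smallest cell and the fastest local velocity in the domain set the binding constraint, which is why refining a mesh usually forces a smaller time step too.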
For large time stepping, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:
https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam
For the record, this point falls under tip (1), Understand CFD.
(9) Work through the OpenFOAM Wiki “3 Week” Series
If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:
https://wiki.openfoam.com/%223_weeks%22_series
If you are a graduate student, and have no job to do other than learn OpenFOAM, it will not take 3 weeks. This touches on all the necessary points you need to get started.
(10) OpenFOAM is not a second-tier software – it is top tier
I know some people who have started out with the attitude from the get-go that they should be using a different software. They think somehow Open-Source means that it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open source package. The number of OpenFOAM citations has grown every year consistently (
https://www.linkedin.com/feed/update/urn:li:groupPost:1920608-6518408864084299776/?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518932944235610112%29&replyUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518956058403172352%29).
In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern, and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly – and you may quit.
(11) Meshing… Ugh Meshing
For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/) most things can be accomplished in OpenFOAM, and there are enough third party meshing programs out there that you should have no problem.
Basically, if you are starting out in CFD or OpenFOAM, you need to put in time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.
Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

Here I will present something I’ve been experimenting with: a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me (who knows if you are), you simulate a lot of airfoils – in my case partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.
Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of pain. Especially if you are starting from scratch.
The main ways that I have meshed airfoils to date have been:
(a) Mesh it in a C or O grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or back in the day when I was a PhD student I could use Pointwise – oh how I miss it.
But getting the mesh to look good was always sort of tedious. So I attempted to come up with a python script that takes the airfoil data file plus a few minimal inputs, and outputs a ready-to-run blockMeshDict file.
The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(e) be mostly automatic (few user inputs)
(f) have good mesh quality – pass all checkMesh tests
(g) Quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(h) be able to do both closed and open trailing edges
(i) be able to handle most airfoils (up to high cambers)
(j) automatically handle hinge and flap deflections
In Rev 1 of this script, I believe I have accomplished (a) thru (g). Presently, it can only handle airfoils with closed trailing edges. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.
There are existing tools and scripts for automatically meshing airfoils, but I found personally that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script, but also something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!
Hopefully, this is useful to some of you out there!
You can download the script here:
https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher
Here you will also find a template based on the airfoil2D OpenFOAM tutorial.
(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If no errors – run blockMesh
PS: You need to run this with Python 3, and you need to have numpy installed.
The inputs for the script are very simple:
ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil dat file should have a chordlength of 1. This variable allows you to scale the domain to a different size.
airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.
DomainHeight: This is the height of the domain in multiples of chords.
WakeLength: Length of the wake domain in multiples of chords
firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator
growthRate: Boundary layer growth rate
MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.
The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.
BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil
LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge
TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge
inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading, and can help improve mesh uniformity
trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
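Taken together, the input section of curiosityFluidsAirfoilMesher.py looks roughly like the following. Only the variable names come from the list above; the values are placeholders for illustration, not recommendations:

```python
# Illustrative input block for curiosityFluidsAirfoilMesher.py
# (values are placeholders -- tune them for your own case)
airfoilFile = "naca0012.dat"   # Selig-format coordinates, chord = 1
ChordLength = 1.0              # scales the whole domain
DomainHeight = 20.0            # in chords
WakeLength = 20.0              # in chords
firstLayerHeight = 5e-5        # from a y+ estimate
growthRate = 1.15              # boundary layer expansion ratio
MaxCellSize = 0.05             # centerline cell size from the leading edge

# Mesh-quality tuning
BLHeight = 0.1                 # boundary layer block height
LeadingEdgeGrading = 5.0       # 1/4 chord -> leading edge
TrailingEdgeGrading = 5.0      # 1/4 chord -> trailing edge
inletGradingFactor = 1.0       # multiplier on the leading edge grading
trailingBlockAngle = 5.0       # degrees
```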
Inputs:

With the above inputs, the grid looks like this:


Mesh Quality:

These are some pretty good mesh statistics. We can also view them in ParaView:




The Clark-Y has some camber, so it seemed a logical next test after the previous symmetric airfoil. The inputs I used are basically the same as before:

With these inputs, the result looks like this:


Mesh Quality:

Visualizing the mesh quality:




Here is an example of a flying wing airfoil (a good test case, since the trailing edge is tilted upwards).
Inputs:

Again, these are basically the same as the others. I have found that these settings give consistently good results. If you change MaxCellSize, firstLayerHeight, or the gradings, some further adjustment may be required. However, if you simply halve MaxCellSize and halve firstLayerHeight, you "should" get a similar grid quality, just much finer.
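The halve-everything refinement rule above can be sketched as follows. The refine helper is hypothetical, not part of the script; it just shows which inputs scale and which stay fixed:

```python
def refine(inputs, factor=2.0):
    """Return a refined copy of the mesher inputs: the two cell-size
    inputs are divided by `factor`, while gradings and growth rate are
    left unchanged, so the topology stays the same but the grid is finer."""
    out = dict(inputs)
    out["MaxCellSize"] = inputs["MaxCellSize"] / factor
    out["firstLayerHeight"] = inputs["firstLayerHeight"] / factor
    return out

coarse = {"MaxCellSize": 0.05, "firstLayerHeight": 5e-5}
fine = refine(coarse)   # MaxCellSize -> 0.025, firstLayerHeight -> 2.5e-5
```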


Grid Quality:

Visualizing the grid quality




Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will be able to handle highly cambered airfoils, open trailing edges, control surface hinges, etc.
The long-term goal is an automatic mesher with an H-grid in the spanwise direction so that readers of my blog can create semi-span wing models extremely quickly!
Comments and bug reporting encouraged!
DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Here is a useful little tool for calculating the properties across a normal shock.
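A calculator like this implements the standard normal shock relations for a calorically perfect gas. A minimal sketch in Python (a re-derivation from the textbook relations, not the calculator's actual source):

```python
import math

def normal_shock(M1, gamma=1.4):
    """Return (M2, p2/p1, rho2/rho1, T2/T1) across a normal shock
    for upstream Mach number M1 > 1, calorically perfect gas."""
    if M1 <= 1.0:
        raise ValueError("Normal shock requires supersonic upstream flow (M1 > 1)")
    M2 = math.sqrt((1 + 0.5 * (gamma - 1) * M1**2)
                   / (gamma * M1**2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)           # p2/p1
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)   # rho2/rho1
    T_ratio = p_ratio / rho_ratio                                  # T2/T1
    return M2, p_ratio, rho_ratio, T_ratio

# Classic check: M1 = 2 in air gives M2 ≈ 0.577 and p2/p1 = 4.5
M2, p21, r21, T21 = normal_shock(2.0)
```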
If you found this useful and need more, visit www.stfsol.com. One of STF Solutions' specialties is providing clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros, and GUIs for specific engineering purposes such as pipe sizing, pressure-loss calculations, heat-transfer calculations, 1D flow transients, optimization, and more. Visit STF Solutions at www.stfsol.com for more information!
Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results or their suitability for any given purpose.