Ajay Harish | Blog | SimScale

The Challenger Disaster: Deadly Engineering Mistakes
https://www.simscale.com/blog/space-shuttle-challenger-disaster/ (28 Jan 2019)

In 2016, I published one of my very first articles, “How to Choose a Hyperelastic Material Model for FEA”, and almost 2.5 years later, I’ve come back to address the persisting issue of hyperelastic materials/polymers in a different light. Today, let’s see how this is related to the Space Shuttle Challenger (OV-99). Rubber seals are used across all industries, in countless applications, and are almost taken for granted today. Here is an interesting simulation from the SimScale Public Projects Library: rubber seal sliding.

If only this had been considered three decades ago, it could have saved the Space Shuttle Challenger from disaster. There are numerous articles addressing the Challenger disaster in the media, from Wikipedia pages to coverage by National Geographic. This page on the NASA website lists some of the major theories and reports on the tragedy.

One of the cruelest aspects of the catastrophe was the fate of the crew on board the Space Shuttle Challenger. It was originally believed that the shuttle exploded and the crew died instantly. Later, it was found that the astronauts were alive, trapped in their seats, and even conscious until the crew cabin hit the Atlantic Ocean at 321 kilometers per hour.

Could the Challenger disaster have been prevented if the engineering was executed better?



What Was the Challenger Disaster?

The Space Shuttle Challenger, with a seven-member crew, was launched on the morning of 28th January 1986 from Cape Canaveral, Florida (USA); the launch originally scheduled for the 27th had been postponed. The temperature that morning was about -7 °C. This was the 10th flight of the Space Shuttle Challenger.

73 seconds into the flight, it was believed that the solid rocket boosters exploded, killing all the crew on board and plunging the shuttle into the Atlantic Ocean. Initial investigations reported that the O-rings in a joint of one of the solid boosters failed due to the low temperatures that day, eventually leading to the breakup of the shuttle.

However, over the last 30 years, this has been a major case study for engineers and academics alike, who have questioned the theories rigorously. Today, our understanding of the matter has greatly matured as technological advancements have revealed the true causes of the Challenger disaster.

What Actually Happened to the Space Shuttle Challenger?

As I discussed in a previous article on hyperelasticity, rubber materials exhibit a glass transition. At temperatures above the glass transition temperature, the material is soft and rubbery; at temperatures below it, the material behaves in a glassy, brittle manner.

Fig. 01: Structure of the solid rocket booster on the Space Shuttle (Source: https://commons.wikimedia.org/wiki/File:Space_Shuttle_SRB_diagram.png)

The O-rings were installed between the solid fuel segments, as shown in Fig. 01. Their purpose was to prevent hot combustion gases and particles from escaping from the inside of the booster. For redundancy, two O-rings were installed. On the internal layer, a heat-resistant putty was added to further isolate the rings from the hot gases.

Fig. 02: A simplified cross-section of the field joints between the segments of the pre-Challenger Space Shuttle Solid Rocket Boosters. A – steel wall, 12.7 mm thick; B – primary O-ring; C – backup (secondary) O-ring; D – strengthening cover band; E & F – insulation; G – lining; H – sealing paste (heat-resistant putty); I – solid propellant (Source: https://commons.wikimedia.org/wiki/File:Z%C5%82%C4%85cze_mi%C4%99dzysegmentowe_rakiety_SRB.svg)

Three Possible Issues that Caused the Challenger Disaster

  • Consider the cross-sectional view in Fig. 02. The booster ignition displaced the heat-resistant putty and increased the air pressure between the putty and the O-ring. This caused the gap between the steel wall (A) and the insulation (E) to increase.
  • Due to exposure to hot gases, the O-rings underwent erosion.
  • On Flight 51-C on 24th January 1985, blow-by was observed, meaning that hot gases had penetrated past both O-rings. That launch had taken place at the lowest ambient temperature of any shuttle flight up to that point.

Further tests in March 1985 demonstrated that O-ring resiliency degraded when the rings were used below 10 °C. On 31st July 1985, a memo circulated that discussed a definite fear of losing a flight because of these conditions.

In the days before the launch, engineers repeatedly raised the issue of launching in cold conditions. Unfortunately, the discussion eventually died down, even though many engineers felt that their concerns had not been addressed. As is commonly believed, there was no formal pressure to launch despite the delays. At this juncture, it is interesting to understand how the decision was made. The two most notable groups of decision makers were:

  1. Engineers: Morton-Thiokol Inc. (MTI, the contractor that manufactured the solid rocket boosters and their O-ring joints), Roger Boisjoly (O-ring specialist), Arnold Thompson (engineer), Allen McDonald (project supervisor of the solid fuel rocket), Jerry Mason (Senior VP & GM), Joe Kilminster (VP, Space Booster Programs), Robert Lund (VP, Engineering)
  2. NASA: Larry Mulloy (Solid Rocket Booster project manager), George Hardy (NASA Deputy Director)


The Eve of the Launch

On the eve of the initial launch date, the MTI engineers and management recommended delaying the launch because the temperature was too low (less than 10 °C). It later came out that Kilminster in particular was opposed to the launch, while Mulloy wanted to press on. As per NASA’s regulations, it was the responsibility of the contractor to demonstrate the launch readiness of the components; any inconclusive data automatically resulted in a no-go. However, Larry Mulloy put the burden on MTI to prove that the system was not ready.

Amid all the politics, Robert Lund—who was initially reluctant—agreed to the launch along with Jerry Mason. While Allen McDonald argued against the launch, Joe Kilminster declared to NASA that the data was inconclusive and hence a launch was not recommended. At this point, the NASA managers Larry Mulloy and George Hardy informed MTI that it could only make recommendations and that they wanted to go ahead with the launch. They informed the higher-ups about going ahead, ignored the concerns raised, and unfortunately made history. We all know what happened next.

The Space Shuttle Challenger Launch

The shuttle was launched as scheduled. The temperature, as predicted, was low enough for the rings to be extremely stiff and unable to provide sufficient sealing. As shown in Fig. 03, plumes of smoke were visible immediately.

Fig. 03: Black plumes observed at the launch of the Space Shuttle Challenger (Source: https://commons.wikimedia.org/wiki/File:STS-51-L_grey_smoke_on_SRB.jpg)

Under the pressure of ignition, the cylindrical booster casings flexed away from each other at the joint, creating an opening. The O-rings were expected to shift and seal this gap. However, at temperatures below the glass transition temperature, the O-rings behaved in a glassy and brittle manner, whereas above these temperatures they are flexible and elastic. Thus, it took much longer for the O-rings to shift into place and create a seal. Both O-rings were vaporized across a 70° arc, allowing gases to leak through a growing hole. That day, the shuttle also experienced stronger than normal wind shear, which enlarged the hole rapidly.

At 73 seconds after launch, the disintegrating external tank caused the shuttle to veer from its correct attitude. This increased the aerodynamic forces to more than 20 g (far beyond the design limit of 5 g), resulting in the breakup of the shuttle. The SRBs continued to fly on in an uncontrolled fashion. The largely intact crew cabin then fell into the Atlantic Ocean, with the crew likely conscious almost until impact.

Conclusion

The Challenger disaster was broadcast worldwide and replayed time and again. To this day, people feel that they personally witnessed the disaster and were somehow connected to it. A major factor was the failure to adequately test the behavior of the polymeric seal material across the relevant range of temperatures. This, along with several other contributing engineering faults, eventually led to the tragedy. The Challenger disaster is an infamous example of how even the simplest engineering concepts must be respected, tried, and tested, or misfortune can strike, sometimes fatally.

Richard Feynman, a member of the Rogers Commission that investigated the disaster, famously demonstrated that the rubber O-rings used to seal the solid rocket booster joints failed to expand back when the temperature was at or below 32 degrees F (0 degrees C).

Applications of FEA in Civil Engineering
https://www.simscale.com/blog/fea-applications-civil-engineering/ (21 Jan 2019)

Finite element analysis (FEA) is an extremely useful tool in the field of civil engineering for numerically approximating physical structures that are too complex for regular analytical solutions. Consider a concrete beam supported at both ends and carrying a concentrated load at its center span. The deflection at the center span can be determined mathematically in a relatively simple way, as the boundary conditions are simple and well defined. However, once you place the same beam in a practical application, such as within a bridge, the forces at play become much more difficult to analyze with simple mathematics.
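As a point of reference, here is a minimal sketch of that "simple" analytical case; the load, span, and cross-section values are assumptions chosen purely for illustration:

```python
# Minimal sketch (assumed values): the closed-form mid-span deflection of a
# simply supported beam under a central point load, delta = P*L^3 / (48*E*I).
# Once the same beam sits inside a real bridge, with moving loads and complex
# supports, this hand calculation no longer suffices and FEA takes over.

P = 50e3                      # central point load [N]
L = 6.0                       # span [m]
E = 30e9                      # Young's modulus of concrete [Pa]
b, h = 0.3, 0.5               # rectangular cross-section width and depth [m]
I = b * h**3 / 12             # second moment of area [m^4]

delta = P * L**3 / (48 * E * I)
print(f"mid-span deflection: {delta * 1000:.2f} mm")
```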

Within the fields of structural and civil engineering, there are several such problems where FEA can be used to simplify a structure and understand its overall behavior.

Finite element analysis of a bridge’s elastomeric bearing pad with SimScale

As the field of computer-aided engineering (CAE) has advanced, so have FEA tools, with tremendous benefit to the civil engineering sector. The use of advanced FEA tools has not only led to more innovative and efficient products but also furthered the development of accurate design methods.

FEA in Structural Engineering

Whether you are building a simple residential building or the next Burj Khalifa, understanding the structural behavior and integrity of your building is extremely important to ensure the safety of its occupants.

Structural analysis involves determining the behavior of a structure when it is subjected to loads, such as those resulting from gravity, wind, or, in extreme cases, natural disasters (e.g., earthquakes). Using basic concepts of applied mathematics, any built structure can be analyzed—buildings, bridges, dams or even foundations.

For example, in the right conditions, a structure such as the Burj Khalifa can sway by up to 3 m at its highest point. Imagine living on the top floor and being subjected to this kind of motion. For more reasons than one (nauseated inhabitants included), this kind of motion needs to be controlled, and many tall structures use a damper to reduce it. Taipei 101 has a famous tuned mass damper, as shown in the video below.

However, in contrast to Taipei 101, rather than take valuable space with a damper, the team of architects and engineers responsible for the Burj Khalifa instead specifically designed its shape to “confuse” winds, and therefore reduce oscillation from wind loads. To see this unique shape in action, you can check out the below simulation video created with SimScale, or if you’re interested to read further, check out this Quora answer: “How does Burj Khalifa Survive Wind Loads“.

On the other end of the spectrum, earthquakes are a major concern in several highly populated parts of the world. When it comes to natural disasters, a large number of building codes are not up to standard, which can often result in devastating casualties. For example, in the 1985 Mexico City earthquake, a significant portion of the damage occurred to buildings that had between 8 and 15 stories; buildings that were taller or shorter fared much better. Why? The frequency of the earthquake's seismic waves happened to match the natural frequency of the mid-sized buildings, causing them to oscillate more violently and eventually collapse. The video below provides a concise explanation of the forces behind this movement and how it can be reduced, and the short sketch after it puts rough numbers on the resonance argument.
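The sketch below is only an illustration: the period rule of thumb and the shaking frequency band are rough assumptions, not data from the 1985 event.

```python
# Minimal sketch (rule-of-thumb values, for illustration only): a building's
# fundamental period is often estimated as T ~ 0.1 * N seconds for N stories,
# so its natural frequency is f_n = 1 / T.  Resonance occurs when the dominant
# frequencies of the ground shaking overlap f_n.

f_low, f_high = 0.5, 1.3                 # assumed dominant shaking band [Hz]

for stories in (4, 8, 12, 15, 30, 50):
    period = 0.1 * stories               # fundamental period [s], rule of thumb
    f_n = 1.0 / period                   # natural frequency [Hz]
    status = "near resonance" if f_low <= f_n <= f_high else "off resonance"
    print(f"{stories:2d} stories: f_n = {f_n:.2f} Hz -> {status}")
```

With these assumed numbers, only the mid-rise buildings fall inside the shaking band, which is the same qualitative pattern described above.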

Standard methods like the moment distribution method, the unit load method, or strain energy theorems can be used to determine the behavior of simple structures (such as cantilevered beams, simply supported beams, or trusses). For non-conventional structures, however, we have to go deeper.

Originally, civil engineers relied on laboratory experiments to solve these design problems, especially with regard to the behavior of steel structures subjected to high wind loads and earthquakes. However, such reliance on laboratory testing was costly and not immediately accessible. Thus, structural codes were developed. These codes made it easier for engineers to define what sort of behavior was acceptable and safe for standard structures. With the recent advent of accessible CAE tools, however, designing, testing, and guaranteeing the safety of an innovative building project and its materials has become easier, faster, and significantly cheaper.



Using FEA in Other Areas of Civil Engineering

Innovative Building Materials

Concrete has been used as a building material for a very long time, yet research continues into many phenomena, such as alkali–silica reactions. Why does concrete crack, and how does its composition influence crack growth? Can we create self-healing materials?

One of the biggest uncertainties in an FEA simulation is the accurate determination of material properties. This is where novel research areas like multiscale modeling come into the picture. Using a multiscale model, one can use the microstructure (i.e., the properties of each individual constituent) to determine the properties of the concrete (i.e., the properties of the whole), as sketched below.
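The following is a deliberately simple sketch of the multiscale idea, using the rule of mixtures with assumed phase properties rather than any measured concrete data:

```python
# Minimal sketch (assumed phase properties): estimating the Young's modulus of
# concrete from the properties and volume fractions of its constituents.  The
# Voigt (parallel) and Reuss (series) averages give upper and lower bounds on
# the homogenized modulus of the composite.

phases = {                      # (volume fraction, Young's modulus [GPa]) -- assumptions
    "aggregate":    (0.70, 60.0),
    "cement paste": (0.30, 20.0),
}

E_voigt = sum(f * E for f, E in phases.values())            # upper bound
E_reuss = 1.0 / sum(f / E for f, E in phases.values())      # lower bound

print(f"homogenized E: between {E_reuss:.1f} and {E_voigt:.1f} GPa")
```

Real multiscale models go far beyond such bounds, but the principle is the same: constituent-level information feeds the macroscopic property used in the FEA model.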

Sedimentation, Erosion & Hydrology

Water motion in coastal areas is far more pronounced than in inland waterways. High-energy waves, tides, ocean currents, storm surges, tsunamis, wind-driven currents, and similar phenomena complicate matters and, together with the sediment the water carries, cause damage to and destruction of marine structures.

In coastal flow problems, boundary conditions involving the reflection and diffraction of waves and currents make it difficult for civil engineers to obtain analytical solutions. Thus, coastal flow modeling (finite element modeling of fluid flows) combined with numerical-empirical methods is the current trend.

It is not just coastal areas: in catchment regions, hydrological models have been used over the last decades to understand the flow of water through porous soil and its contribution to groundwater levels.

While a linear static analysis of hydraulic structures like dams considers the load of the impounded water on the dam, a nonlinear analysis is needed to comprehensively capture the behavior of the conveyance system, including the inner cushion surface, the water-facing outer surface of the steel liner tube, and the contact between the concrete and the steel liner.



SimScale Public Projects for Civil Engineering Applications

You can also look at several public projects on the SimScale database related to simulations for civil engineering.

  1. Static Analysis of an Elastomeric Bearing Pad: This project addresses the static behavior of elastomeric bearing pads, which are commonly used in bridges and other structures for vibration isolation.
  2. Modal Analysis of Truss Bridge Design: When a bridge is designed as a truss, modal analysis is an essential part of the design process.

Conclusion

While the applications of FEA in civil engineering are already innumerable, the potential is only increasing: the market is predicted to keep growing to over USD 10B by 2020. Engineers have realized that it is not the stronger structures that survive, but the smarter ones. In this long-term goal, CAE has become more appreciated for the value it brings to the table, especially today, when cloud-based simulation tools like SimScale make CAE a tool not just for larger companies but also for small and medium-sized enterprises.

Implicit vs Explicit Finite Element Method (FEM): What Is the Difference?
https://www.simscale.com/blog/implicit-vs-explicit-fem/ (8 Jan 2019)

What is the Finite Element Method (FEM)?

The finite element method (FEM) is a numerical problem-solving methodology used across multiple engineering disciplines for applications such as structural analysis, fluid flow, heat transfer, mass transport, and other real-world physical phenomena. The method subdivides the overall problem into simpler subdomains, called finite elements, and systematically assembles and solves equations that approximate the values of the unknowns. The resulting systems of equations can then be solved with either an implicit or an explicit analysis.

Why is the Finite Element Method Necessary?

FEM is used to simulate naturally or artificially occurring phenomena. This numerical technique is the foundation of simulation software: it allows engineers, including civil and mechanical engineers, to assess their designs for stresses, weak spots, and other issues before the prototyping or implementation stages.


Time-Dependent vs. Time-Independent Analysis

For all nonlinear and non-static analyses, incremental load (or displacement) steps are needed. In simpler terms, the loading history must be broken into steps to solve the mathematical problem. Problems fall into two groups: time-dependent and time-independent. To solve them, we commonly use ‘implicit’ and/or ‘explicit’ methods.

We refer to problems as ‘time-dependent’ when the effects of acceleration are pronounced and cannot be neglected.  For example, in a drop test, the highest force occurs within the first few milliseconds as the item decelerates to a halt. In this case, the effect of such a deceleration must be accounted for.

In contrast, when loads are slowly applied onto a structure or surface (i.e., when a monitor is placed onto a table) the loading can be considered ‘quasi-static’ or ‘time-independent’. This is because the loading time is slow enough that the acceleration effects are negligible. For more time-dependent and time-independent examples, there are several projects in the SimScale Public Projects database. Some interesting examples are also depicted in Figure 01.

Figure 01: Static analysis of an aero bracket (top-left), a grocery basket (top-right), and a crankshaft (bottom-right); dynamic analysis of a chain link (bottom-left)

Implicit vs. Explicit Problems

All of these problems are expressed through mathematical partial differential equations (PDEs). While computers cannot solve PDEs directly, they are well equipped to solve matrix equations. These matrix equations can be linear or nonlinear. In most structural problems, the nonlinearities fall into three categories:

  • Material Nonlinearity: the stress–strain relationship is nonlinear and deformations and strains can be large (e.g., polymer materials)
  • Geometric Nonlinearity: strains are small, but rotations are large (e.g., thin structures)
  • Boundary Nonlinearity: the boundary conditions themselves are nonlinear (e.g., contact problems)

In linear problems, the PDEs reduce to a matrix equation:

$$ [K]\{x\} = \{f\} $$

and for nonlinear static problems:

$$ [K(x)]\{x\} = \{f\} $$

For dynamic problems, the matrix equations come down to:

$$ [M]\{\ddot{x}\} + [C]\{\dot{x}\} + [K]\{x\} = \{f\} $$

where the overdots denote time derivatives: $\{\dot{x}\}$ is the velocity vector and $\{\ddot{x}\}$ is the acceleration vector.

Implicit FEM Analysis

One method of solving for the unknowns {x} is through matrix inversion (or an equivalent process such as factorization). This is known as an implicit analysis. When the problem is nonlinear, the solution is obtained in a number of steps, and the solution for the current step is based on the solution from the previous step. For large models, inverting the matrix is computationally expensive and may require advanced iterative solvers (rather than standard direct solvers). In dynamics, this corresponds to implicit time integration schemes such as backward Euler. These schemes are unconditionally stable and allow larger time steps. Despite this advantage, implicit methods can be extremely time-consuming when solving dynamic and nonlinear problems.
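To make the idea concrete, here is a minimal sketch (with made-up matrices; this is not SimScale code) of implicit backward Euler time integration for a tiny linear system, where every step requires solving a linear system but remains stable even for large time steps:

```python
import numpy as np

# Implicit (backward Euler) integration of M*x'' + C*x' + K*x = f, written in
# first-order form z = [x, v].  Each step solves a linear system -- the
# "matrix inversion (or equivalent)" mentioned above -- and the scheme stays
# stable regardless of the step size.

M = np.diag([1.0, 1.0])                      # mass matrix (assumed values)
K = np.array([[400.0, -200.0],
              [-200.0, 200.0]])              # stiffness matrix (assumed values)
C = 0.02 * K                                 # stiffness-proportional damping
f = np.array([0.0, 1.0])                     # constant external load

n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
b = np.concatenate([np.zeros(n), np.linalg.solve(M, f)])

dt, steps = 0.05, 200                        # a relatively large step is fine here
z = np.zeros(2 * n)                          # initial displacement and velocity
lhs = np.eye(2 * n) - dt * A                 # constant system matrix
for _ in range(steps):
    z = np.linalg.solve(lhs, z + dt * b)     # backward Euler update

print("displacements after", steps * dt, "s:", z[:n])
```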

Explicit FEM Analysis

Explicit analyses aim to solve directly for the acceleration $\{\ddot{x}\}$. In most cases, the mass matrix is considered “lumped” and is thus diagonal. Inversion of a diagonal matrix is straightforward, as it only involves inverting the terms on the diagonal. Once the accelerations are calculated at the nth step, the velocity at step n+1/2 and the displacement at step n+1 are calculated accordingly. This scheme, however, is only conditionally stable, so smaller time steps are required. To be more precise, the time step in an explicit finite element analysis must be less than the Courant time step (i.e., the time taken by a sound wave to travel across an element), while implicit analyses have no such limitation.
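The counterpart sketch below (again with assumed values, not SimScale code) shows the explicit side: a lumped mass "inversion" that is just an element-wise division, and a time step bounded by the Courant limit:

```python
import numpy as np

# Explicit central-difference integration with a lumped (diagonal) mass matrix.
# No linear system is solved, but the time step must satisfy dt < L_e / c,
# the time a stress wave needs to cross the smallest element.

E, rho = 210e9, 7850.0                       # steel: Young's modulus, density
c = np.sqrt(E / rho)                         # wave speed, roughly 5200 m/s
L_e = 0.005                                  # smallest element size: 5 mm
dt = 0.9 * L_e / c                           # time step with a 0.9 safety factor

m = np.array([1.0, 1.0])                     # lumped nodal masses (diagonal M)
K = np.array([[400.0, -200.0],
              [-200.0, 200.0]])              # stiffness matrix (assumed values)
f = np.array([0.0, 1.0])                     # constant external load

x = np.zeros(2)                              # displacements
v = np.zeros(2)                              # velocities at the half step
for _ in range(2000):
    a = (f - K @ x) / m                      # acceleration: element-wise divide
    v += dt * a                              # velocity at n + 1/2
    x += dt * v                              # displacement at n + 1

print(f"stable time step: {dt:.2e} s, displacements: {x}")
```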

What Is the Difference Between Implicit and Explicit FEM?

Explicit FEM advances the state of a given system to a later time using only quantities already known at the current time. In contrast, an implicit analysis finds a solution by solving an equation that involves both the current and the later state of the system. This requires additional computation and can be harder to implement, but it is used in place of explicit methods when problems are stiff and explicit time stepping would be impractical.

For more information, this Wikipedia page provides great examples, with illustrations, of how both methodologies give numerical approximations to the solutions of time-dependent ordinary and partial differential equations.

When to Use Explicit FEM

Explicit analysis offers a faster solution for events governed by dynamic equilibrium, in other words:

Sum of all forces = mass x acceleration

The explicit method should be used when strain rates exceed roughly 10 s⁻¹ or velocities exceed roughly 10 m/s. Such events are best exemplified by extreme scenarios such as an automotive crash, a ballistic impact, or even a meteor impact. In these cases, the material models need to account not only for the variation of stress with strain but also with strain rate; at these speeds, strain-rate effects make a particularly important contribution.

When to Use Implicit FEM

The implicit method should be used when the events are much slower and the effects of strain rates are minimal. Once the growth of stress as a function of strain can be established, these can be analyzed using implicit methods. In this case, one can consider a static equilibrium such that:

Sum of all forces = 0

This covers many of the most common engineering problems.

Using Parallel Servers for Solutions

The decision between implicit and explicit FEM directly impacts solution speed and the potential for parallelization. Implicit systems involve matrix inversions (factorizations) that are computationally demanding and do not scale directly with the number of processors, although several parallel solvers are available.

During the solution process, these processors need to communicate with each other continuously. As the number of processors increases, a point is reached where there is no further advantage to adding more, because the communication overhead outweighs the gain. As an analogy, delegating a task to 5 people is often far more efficient than delegating it to 100 people, simply because of the communication involved.

In contrast, explicit problems most often use a lumped mass matrix, which decouples the equations. Imagine having a diagonal matrix to solve, where each equation is independent and can be sent to a separate processor. Such problems scale easily with processing power and can be computed rapidly.


Conclusion

Finite element analysis of an aircraft engine bearing bracket with SimScale

The most important thing to remember when choosing between implicit and explicit FEM analysis is not to lose sight of the physics of the problem. The choice directly influences how the physics is captured during the simulation and hence affects the accuracy of the solution.

Set up your own cloud-native simulation via the web in minutes by creating an account on the SimScale platform. No installation, special hardware, or credit card is required.

Battery Design: Solving Heating Issues with Heat Transfer Simulation
https://www.simscale.com/blog/battery-design-heat-transfer/ (4 Sep 2018)

Battery technology is an integral part of our lives: from smartphones to massive electrochemical energy storage systems and from hybrid automobiles to fully electric airplanes, our dependence on batteries is ever increasing. This technology, however, is far from perfect, and optimizing battery design—particularly in terms of thermal management and heat transfer—is a key challenge for engineers and manufacturers today. 

While lithium-ion batteries are the best rechargeable batteries available today, they suffer from two major disadvantages: (1) they degrade, albeit slowly, and (2) they are quite sensitive to heat. In this article we will focus on the second aspect—more specifically, we will address the use of numerical simulations in understanding thermal management and heat transfer in battery technology. Though much of the following discussion concerns the battery packs used in electric vehicles, it is applicable to any technology that utilizes lithium-ion technology.

The performance and life of a battery are, among other things, affected by the battery design, the materials used, and the operating temperature. For battery packs used in electric or hybrid vehicles, keeping the operating temperature in the right range (usually 20 °C – 35 °C) is critical to maximizing efficiency. Operating at lower temperatures reduces capacity, while higher temperatures shorten lifespan. Reports indicate that an electric vehicle's range could decline by as much as 60% when the ambient temperature drops below −6 °C and by roughly 50% when operated at 45 °C. Another factor that affects the lifespan of battery packs is the internal temperature distribution: a difference of more than roughly 5 °C within a cell or module (many of which can sit inside one pack) reduces both the overall lifespan and the capacity. Fig. 01 shows the temperature distribution in a standard battery rack.

Fig. 01: Temperature distribution in a standard battery rack, simulated with SimScale; temperature is shown in Kelvin. Problem areas around the top and middle are clearly visible. (Source: SimScale Public Projects)

As illustrated, temperatures can, in normal circumstances, range between 25 °C and 35 °C. Without question, the thermal behavior of batteries in realistic operating conditions has a strong influence on their utility across applications; hence, efficient and accurate thermal management is of paramount importance.

Overview of the Simulation-Based Approach

Numerical simulations of thermal management systems have proven to be an excellent way to develop and improve battery design at a significantly lower cost than physical testing. A well-defined and designed simulation approach can help predict thermal physics inside a battery accurately, and therefore, can act as a useful tool during the early stages of the design process.

Many different simulation models have been used to evaluate the thermal performance of a battery cell—from simple lumped capacitance models on one end of the spectrum to full-blown 3D simulation models on the other. However, all of these models are constructed using the same basic pieces of the fundamental energy balance equation: (a) What are the sources of heat generation? (b) What are the geometric and thermal properties of the battery cells? And, finally, (c) What cooling mechanism is in place? Different models account for these components to varying degrees of fidelity to suit the desired accuracy and cost considerations.

Heat is generated from two sources:

  1. Electrochemical operation, which relates to the heat generated due to chemical reactions inside the battery.
  2. Joule heating, also known as Ohmic heating or the heat generated due to flow of electricity.

Both of these sources need to be considered through their own governing equations. Each one depends on the material properties, local temperature and, of course, the applicable geometry. It is however common practice to use experimentally validated model equations for both of these aspects in order to save significantly on some computations, as well as to simplify the simulation framework.

The geometry of the battery cells and the overall pack could also play a potentially important role in the heat transfer characteristics of the system. It is becoming increasingly common to use full 3D geometries (provided as CAD models) as inputs in the analysis rather than a relatively simplified 2D approximation. The material properties of the different components are obtained from the manufacturer’s data or from other experimental studies.

Lastly, convection is typically the main method for heat dissipation (radiation plays a minimal role, if at all) to the ambience. Conduction heat transfer within the battery may or may not be considered, depending on the desired fidelity of the simulations.



Putting It All Together

Perhaps the simplest approach is the use of a lumped capacitance model. This is a transient conduction approach that assumes the temperature of a solid is spatially uniform and a function of time only. Clearly, such an approach sacrifices spatial detail. Nevertheless, there are instances when these models, if carefully implemented, can provide fairly accurate transient data at very low cost, as the sketch below suggests.
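The following is a minimal sketch of such a lumped capacitance model for a single cell; every parameter value is an assumption chosen for illustration, not data from any specific battery:

```python
# Lumped capacitance model of one battery cell: the cell temperature T(t) is
# assumed spatially uniform; heat enters from Joule heating I^2*R plus an
# assumed electrochemical term, and leaves by convection h*A*(T - T_amb).

m, cp = 0.045, 900.0          # cell mass [kg] and specific heat [J/(kg K)]
h, A = 15.0, 0.0042           # convection coefficient [W/(m^2 K)], surface area [m^2]
I, R = 5.0, 0.030             # discharge current [A] and internal resistance [ohm]
q_chem = 0.2                  # assumed electrochemical heat generation [W]
T_amb, T = 25.0, 25.0         # ambient and initial cell temperature [deg C]

dt, t_end = 1.0, 1620.0       # 1 s steps over a 1620 s discharge
for _ in range(int(t_end / dt)):
    q_gen = I**2 * R + q_chem             # total heat generation [W]
    q_out = h * A * (T - T_amb)           # convective loss [W]
    T += dt * (q_gen - q_out) / (m * cp)  # explicit update of dT/dt

print(f"cell temperature after {t_end:.0f} s: {T:.1f} deg C")
```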

On the other hand, detailed thermal simulations (such as those provided by SimScale) can provide a more holistic overview of the thermodynamics involved, considering fluid flow and heat transfer within a battery module or pack. In doing so, they make it possible to design better battery cooling systems. These simulations can use exact specifications of the material properties, geometric details, and initial and boundary conditions; if everything is set up correctly, highly accurate results can be expected. CFD techniques have been applied to this kind of thermal analysis with great success. Cloud-based simulation tools allow the overall computational costs to be considerably reduced while delivering detailed spatial and transient data. This can be invaluable in establishing a fundamentally sound understanding of the thermal physics involved.

Battery Design Simulation with CFD

An example of a successful CFD battery simulation can be found in the work of Yi, Koo, and Shin in their paper “Three-Dimensional Modeling of the Thermal Behavior of a Lithium-Ion Battery Module for Hybrid Electric Vehicle Applications”, published in the journal Energies. The Li-ion battery module was set up as shown in Fig. 02.

Fig. 02: CFD setup for the LIB battery module (Source: J. Yi, B. Koo and C. B. Shin, “Three-Dimensional Modeling of the Thermal Behavior of a Lithium-Ion Battery Module for Hybrid Electric Vehicle Applications,” Energies, vol. 7, pp. 7586–7601, 2014)

The resulting temperature distribution within the module after 1620 seconds of discharge and heat transfer is as shown in Fig. 03.

Fig. 03: Temperature distribution of LIB cells after 1620 s (Source: J. Yi, B. Koo and C. B. Shin, “Three-Dimensional Modeling of the Thermal Behavior of a Lithium-Ion Battery Module for Hybrid Electric Vehicle Applications,” Energies, vol. 7, pp. 7586–7601, 2014)

Conclusions

The multiphysics nature of this problem means that in each of these approaches, simplifications have been made to several aspects. Therefore, there is always room for improvement. The list below shows just a selection of these challenging aspects:

  • More accurate modeling of the battery chemistry and charge/discharge cycles;
  • Batteries that consist of a wide range of materials, including thin layers of metals (encasing the cells), porous materials, etc;
  • If several layers of different materials are used within the battery design, internal material can be anisotropic in nature;
  • If the material properties of the battery design are generally not very well known, this can significantly affect simulation accuracy; and
  • Modeling cooling fluid flow is always a challenge due to complex geometry and possible fluid turbulence involved.

Increasing computational power has allowed researchers to account for more of these aspects accurately and efficiently, improving our confidence in the predictive capability of such simulations. In spite of the remaining challenges, numerical simulations have contributed tremendously to the design of better thermal management systems for batteries and will continue to do so in the foreseeable future!

Check out all of our SimScale blogs here for more helpful articles!

Time Trial Bike vs Standard Bike in Tour de France
https://www.simscale.com/blog/time-trial-bike-tour-de-france/ (20 Jul 2018)

With a global viewership of over 3.5 billion, the Tour de France is the third-largest sports event in the world. Started in 1903, it is a multi-stage bike race consisting of 21 stages and is held over a 23-day period. While most of the race takes place in France, it also passes through several other countries. Before we get into a discussion about the bikes, it is necessary to understand a little about the event itself.

A Glimpse into the Tour de France

The modern Tour de France typically consists of about 20 teams, each with eight riders per team. The 23-day event, generally held during the month of July, consists of 21 stages, with one stage being completed per day. Over the years, the format of the race has remained the same and includes time trials, and passage through the mountains of the Pyrenees and Alps, before finally finishing on the Champs-Élysées in Paris.

The 2018 Tour de France recently started on July 7th and the last stage will be held on July 29th. A total of 176 riders from 22 teams are participating in this 3351 km journey. The stages include flat tracks, medium and high mountains, and two time-trial stages (one for each team on July 9th and one for individuals on July 28th).

In the team time trial, a team's result is determined by the time of its fourth-fastest rider. To increase their chances of winning, the first four riders not only stay close together but also ride in each other's slipstream.

Fig. 1: Six cyclists in a Belgian tourniquet (Source: https://en.wikipedia.org/wiki/File:BelgischerKreisel.gif)

The individual time trial is ridden alone, and the starting order depends on the riders' overall ranking or performance up to the previous stage. Unlike in the team stage, riding in a competitor's slipstream is forbidden.

What Makes a Time Trial Bike Unique?

Before discussing the bikes themselves, it is important to understand how the time trial differs from the other stages. The regular stages include riding over high and low mountains, on flat tracks, and so on. A time trial, by contrast, is short, usually flat, and typically covered at a rapid pace; the time trials in 2018 were 35.5 km (team) and 31 km (individual).

Unlike in the other rounds, cyclists in the time trial must maintain a steady effort over a sustained period. This is considered one of the most difficult parts of multi-stage races like the Tour de France. The TT round requires the cyclist to be extremely aerodynamic while maintaining steady pedaling and power output for the whole duration. Many young cyclists overexert themselves early on, slow down in the middle, and only realize toward the end that their effort was not consistent enough. These time trials are therefore physically and mentally challenging at the same time.



The Time Trial Bike

These challenging stages thus require a bike that suits their unique requirements. As discussed earlier, the stage is generally flat, with minimal hills and fast-paced racing. The obvious choice is a bike with an extremely light frame and wheels that is aerodynamic and offers a stable weight distribution. Since the TT event requires the cyclist to be extremely focused, a slight reduction in handling can also be traded for a gain in aerodynamic performance.

Check out an article called “Aerodynamics of Cycling Explained through CFD”, based on a video project with FYFD.

Fig. 2: Velocity profiles in three different riding positions

The article discusses aerodynamic drag and shows how a rider can reduce drag by shifting their body lower rather than adopting an upright position. This is essential for time trial bikes, where the goal is to reduce aerodynamic drag as much as possible. The short sketch below shows how strongly the rider's effective frontal area affects the power needed.
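The CdA values in this sketch are rough assumptions used for illustration, not measured data for any particular rider or bike:

```python
# Aerodynamic drag force F = 0.5 * rho * CdA * v^2 and the power needed to
# overcome it, comparing an upright position with a tucked time-trial position.

rho = 1.225                      # air density [kg/m^3]
v = 13.9                         # riding speed, roughly 50 km/h, in [m/s]
positions = {"upright": 0.40, "time-trial tuck": 0.25}   # CdA [m^2], assumed

for name, cda in positions.items():
    drag = 0.5 * rho * cda * v**2          # drag force [N]
    power = drag * v                       # aerodynamic power [W]
    print(f"{name:>16}: drag = {drag:5.1f} N, power = {power:5.0f} W")
```

Even with these rough numbers, the tucked position cuts the aerodynamic power by well over a third at the same speed, which is exactly why the TT position and equipment matter so much.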

One of the first differences is the position of the handlebars. As shown in Fig. 3, cyclists spend most of their time on the armrests with their hands out in front; with modern electronic gear systems, even shifting can be done from this position. The base bar, which carries the brakes, is used almost exclusively at the start of the race or if the bike becomes unstable. This is a very important feature, as it allows the cyclist to maintain a near-horizontal position that reduces drag.

Fig. 3: Handlebar on a standard time trial (TT) bike

However, such a position is not possible during a normal race. While it is better from an aerodynamics perspective, it is an unstable riding position: not only is sudden braking impossible, but even a small shift in weight can cause the bike to wobble and crash. These handlebars are therefore not legal on the rest of the tour, with the exception of the time trial stages.

The next important difference is the wheels: the rear wheel is generally a covered (closed) disc, while the cross-section of the front wheel rim is shaped like an airfoil, as shown in Fig. 4. This plays a big role in aerodynamics, especially on flat tracks where there is not much wind.

Fig. 4: Wheels on a time trial (TT) bike

An interesting case study on rear disc wheels can be found at wing-light.de. In this study, the authors consider several wheels and demonstrate the change in drag force. The study shows that at a yaw angle of 20 degrees, one can even achieve negative drag! Fascinating, right? Why not try this using the SimScale simulation platform?

Conclusion

There are several other minor differences between a time trial bike and the standard bike used for the rest of the tour. Some of the major ones include (a) a frame that integrates smoothly with the top tube and front wheel sections, (b) arrow- (or airfoil-) shaped cross-sections for the frame tubes, (c) a much larger gear pulley, and so on.

You can find more details on simulating this in the article “Bike Aerodynamics Simulation – Reducing Cyclist Drag by 30%”, which also includes a public project of the simulation of a bike’s aerodynamics. You can use this project as a template to start your own analysis. Enjoy simulating!

  

Why the Tacoma Narrows Bridge Collapsed: An Engineering Analysis
https://www.simscale.com/blog/tacoma-narrows-bridge-collapse/ (16 Jul 2018)

The Tacoma Narrows Bridge is the historical name given to the twin suspension bridge—originally built in 1940—that spanned the Tacoma Narrows strait. It collapsed just four months later due to aeroelastic flutter. Since then, this topic has become popular, with several case studies discussing the failure phenomenon of suspension cable bridges.

The construction of the Tacoma Narrows Bridge, in the state of Washington, was completed and the bridge opened to traffic on July 1st, 1940. It was the very first bridge of its type (cable suspension) to incorporate a series of plate girders as roadbed support. It was also the third-largest suspension bridge of its time, with a 2800-foot central span and two side spans of 1100 feet each.

The west-side approach had a 450 ft continuous steel girder, while the east side had a 210 ft reinforced concrete frame. The bridge had two cable anchorages, a 26 ft roadway, two 5 ft sidewalks, and two 8 ft deep stiffening girders. Among several other structural details, the anchorages to which the suspension cables were connected were made of 20,000 cubic yards of concrete, 600,000 pounds of structural steel, and 270,000 pounds of reinforcing steel. Because the deck was so narrow relative to its extremely long length, it was considered a ‘narrow’ bridge. The overall construction cost was a whopping $6 million in 1940, equivalent to well over $100 million today, and all of this for something that lasted just four months and seven days. Yet it remains a great engineering lesson for civil engineers to ponder.

Figure 1: The Tacoma Narrows Bridge on opening day (Source: University of Washington Libraries Digital Collections, via Wikimedia Commons)

The Incident: What Happened on That Fateful Day?

Shortly after construction, the Tacoma bridge was found to buckle and sway dangerously along its length in windy conditions. Even in normal winds the bridge undulated noticeably, which worried engineers about how it would behave in high winds. Alarmed by this, many engineers began conducting wind tunnel experiments on the structural behavior of the bridge under wind loads.

On the day of the Tacoma Narrows Bridge collapse, the bridge experienced winds of about 19 m/s (roughly 70 km/h). The center span was vibrating at a frequency of 36 cycles per minute (cpm) in nine different segments. Over the next hour, the torsional vibration amplitude built up, and the motion changed from a rhythmic rising and falling to a two-wave twisting. Despite all this motion, the centerline of the bridge (along its length) remained almost motionless, while the two halves of the deck on either side twisted in opposite directions.

The deck was visibly twisting in two halves, now at about 14 vibrations per minute. This drastic torsional motion was triggered by the failure of a cable band on the north side that connected to the center of the diagonal ties. Due to the alternating sagging and hogging of the span members, the towers holding them were pulled toward the span. Visible cracks then developed before the entire central span crashed down into the water.

Thankfully, no human life was lost in the incident, but it was still an overwhelming engineering failure. Prof. F. B. Farquharson of the University of Washington was responsible for conducting experiments to understand the oscillations. On that day, the professor and his team recorded the movement of the bridge on camera, and the footage can still be found on YouTube today.

Post-Investigation of the Tacoma Narrows Bridge Collapse

A three-dimensional model at 1:200 scale was built for wind tunnel experiments to understand the exact reason for the failure. The experiments gave rise to a new theory: wind-induced oscillation. An image of the Tacoma Narrows Bridge collapse is shown in Fig. 3.

Figure 3: The Tacoma Narrows Bridge collapse (Image source: Wikipedia)

The shape of the bridge deck was aerodynamically unstable in the transverse direction. The solid vertical girders of the H-shaped section caused flow separation, leading to vortex generation that matched the phase of the oscillation. These vortices fed enough energy into the structure to push the girders further out of position.

The problem that caused the Tacoma Narrows Bridge collapse was not a new one, but it had not been addressed in the design. Stiffness against wind action can be increased through various design measures, such as adding dead load, fitting dampers, using stiffening trusses, or adding guy cables. However, none of these measures were considered in the original design; they only became part of the later forensics.

The Physics Behind the Tacoma Narrows Bridge Collapse

The Tacoma Narrows Bridge collapsed primarily due to aeroelastic flutter. In ordinary bridge design, the wind is allowed to pass through the structure by incorporating trusses. In the case of the Tacoma Narrows Bridge, by contrast, the wind was forced to move above and below the solid deck, leading to flow separation. Such flow separation around an object can lead to the development of a Kármán vortex street as the flow passes the object.

Figure 4: Comparison between a typical bridge deck design and the Tacoma Bridge deck design

The vortex frequency in the Kármán vortex street is the Strouhal frequency, $f_s$, which is given by:

$$ f_s = \frac{U \cdot S}{D} $$

where $U$ is the flow velocity, $D$ is the characteristic length, and $S$ is the Strouhal number (a dimensionless quantity). For example, for a Reynolds number greater than 1000, $S$ is approximately 0.21. In the case of the Tacoma Bridge, $D$ was 8 ft and $S$ was 0.20.
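Plugging in the values quoted above gives the vortex shedding frequency for the wind reported on the day of the collapse; the short sketch below carries out the arithmetic:

```python
# Vortex-shedding (Strouhal) frequency f_s = U * S / D for the Tacoma Narrows
# deck, using the wind speed, Strouhal number, and depth quoted in the text.

U = 19.0                 # wind speed on the day of the collapse [m/s]
S = 0.20                 # Strouhal number for the deck section
D = 8 * 0.3048           # characteristic depth: 8 ft converted to [m]

f_s = U * S / D          # shedding frequency [Hz]
print(f"vortex shedding frequency: {f_s:.2f} Hz ({f_s * 60:.0f} cycles/min)")
```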

Bridge Design Using Simulation

After the Tacoma Narrows Bridge collapse, a new bridge was designed (based on the lessons learned) and built in 1950 (Fig. 4). The new bridge incorporated open triangular trusses and stiffening struts, and it allowed the wind to flow freely through openings in the roadbed. Compared to the previous design, the twisting that developed in the new bridge was considerably less severe.

Because of the Tacoma Narrows disaster, the Whitestone Bridge in the US was strengthened by adding trusses and openings below the road deck to decrease oscillations, and these measures are still working today. The idea of using dynamic and modal analysis in the design of bridges received much greater impetus after this disaster.

The deflection theory serves as a model for complex analytical methods used by many structural engineers to obtain stresses, deflections, etc. This eventually led to the development of finite element analysis (FEA) as a generic tool for designing civil engineering structures.


Nowadays, in bridge design, engineering simulation plays a crucial part in the testing process. Using CFD to simulate wind loads and FEA to investigate stresses and the structural behavior of bridges, engineers can prevent failures like the Tacoma Narrows Bridge collapse and build better and stronger bridges and buildings.

Set up your own cloud-native simulation via the web in minutes by creating an account on the SimScale platform. No installation, special hardware, or credit card is required.

Why Did the Titanic Sink? An Engineer’s Analysis
https://www.simscale.com/blog/why-did-titanic-sink-engineer/ (5 Jan 2018)

On April 14th, 1912, the R.M.S. Titanic was on its maiden voyage from Southampton, England to New York, United States when it collided with a massive iceberg. Of the 2,200 passengers and crew that were aboard, only 705 survived. Despite the builders’ claims that — even under the worst possible conditions at sea — she was unsinkable, it took less than three hours for the Titanic to sink. The ship’s builders even made claims that it should stay afloat for a minimum of 2-3 days if tragedy struck. So why did the Titanic sink? Was it the material failure or bigger design flaws that went unnoticed? Let us analyze why the Titanic sank from an engineer’s perspective.

First, on that note, National Geographic made an interesting CGI on how the Titanic sank:

Figure 1: A CGI by National Geographic of how the Titanic sank

At the time of her construction, the Titanic was the largest ship ever built: 230 m long, 25 stories high, and weighing 46,000,000 kg. Her turn-of-the-century design and technology included sixteen major watertight compartments in the lower section that could easily be sealed off in the event of a punctured hull, which is why she was deemed unsinkable.

On the night of April 14th, although the wireless operators had received several ice warnings from other ships in the area, the Titanic continued to rush through the darkness at nearly full steam. Unfortunately, by the time the lookouts spotted the massive iceberg, it was less than a quarter of a mile off the bow (front) of the ship, making the collision unavoidable.

Imagine trying to suddenly avoid a head-on collision in a car; that sounds hard, right? The Titanic was about 20,000 times heavier and had the full momentum of all that weight driving it forward. Though the engines were immediately thrown into reverse and the rudder turned hard left, slowing and turning took an incredible distance because of the tremendous weight (or mass) of the ship. Without enough distance to alter her course, the Titanic sideswiped the iceberg, damaging nearly 100 meters of the right side of the hull above and below the waterline [1].

The massive side impact caused enough damage to allow water to flood into six of the sixteen major watertight compartments. As water rushed into the starboard side of the ship’s bow, the ship began to tilt down at the front and slightly to the right. The back (or stern) of the ship, however, carried three large and heavy propellers. Think of a lever, as shown in Figure 2: if one end becomes extremely heavy while the other end is pushed down, and the board is not strong enough, the board breaks.

Figure 2: Lever action with two loads on either end, illustrating how the Titanic sank

This is almost exactly what happened to the Titanic. The front of the ship went down into the water, lifting the stern out of it. When the ship was tilted to almost 45 degrees, the stresses in the ship’s midsection increased beyond the material limits of the steel (210 MPa), and the Titanic practically split wide open in the middle. This is how the Titanic sank. [1]


Why Did the Titanic Sink?

While we have had a glimpse as to what caused the ship to start sinking, was that the only reason? What are the scientific theories that have emerged on why the Titanic sank?

One of the first major scientific insights into the sinking of the Titanic was obtained after a 1991 expedition to the wreck, known as the IMAX expedition. This expedition, and the research that followed, opened numerous discussions that led to the uncovering of clues as to why the Titanic sank. Surprisingly, among the major discoveries of this expedition were chunks of metal that were once part of the Titanic’s hull: Frisbee-sized pieces of steel about one inch thick, with three rivet holes, each 1.25 inches in diameter [1].

So why did the Titanic sink? As shown in Figure 3, the ship is believed to have sunk due to multiple contributing factors.

Process of sinking of the Titanic, break-up of Titanic
Figure 3: Why did the Titanic sink? A reconstruction of the break-up of the “Titanic” [2]

Evidence and Analysis of Why the Titanic Sank

Failure Due to Impact on Hull

One of the key pieces in reconstructing the theory of why the Titanic sank included the pieces of steel that were recovered. Let’s see how some pieces of steel helped answer the question “Why did the Titanic sink?”

Most engineers will have performed uniaxial tensile tests during their laboratory sessions. Here, a specimen shaped like a dog bone is pulled to understand how the material changes shape (or deforms) under the applied load. This is continued until the specimen breaks into two pieces. While materials like aluminum undergo ductile fracture, others like cast iron show no yielding and are brittle. For more information on ductile versus brittle behavior and yield criteria, please read the article "What is the meaning of von Mises stress and yield condition?"

In spite of the captain of the ship trying his best to slow down, the huge mass and momentum meant that the Titanic was still moving at a powerful speed when it impacted the iceberg. This high-speed impact was the start of the disaster. When the Titanic collided with the iceberg, the hull steel and wrought iron rivets failed, due to “brittle fracture”.

Most often, for many commonly used structural materials, impact at extremely high speeds results in brittle fracture without any yielding (or plastic deformation). This is a type of catastrophic failure in structural materials, the causes of which include low temperature, high-impact loading, and high sulfur content.

You guessed it right! On the night of the Titanic disaster, all three factors were present: the water temperature was below freezing, the Titanic was traveling at high speed on impact with the iceberg, and the hull steel contained high levels of sulfur. It is here that the chunks of steel discovered during the expedition played a major role, providing the hint that brittle fracture of the hull steel contributed to the disaster. Upon cleaning, the edges of the recovered piece of steel were noted to be jagged, sharp, and almost shattered, like broken china. This brittle fracture of the hull steel is probably what the survivors of the disaster described as a loud noise that sounded like breaking china. Today, typical high-quality ship steel is more ductile and deforms rather than breaks [1]. Astonishingly, scientists discovered that the metal pieces showed no evidence of bending or deformation; they simply shattered! This is one of the main answers to the question "Why did the Titanic sink?"

Laboratory Testing of Hull Materials

In order to confirm this hypothesis on why the Titanic sank, scientists subjected a cigarette-sized specimen/coupon from the pieces to the Charpy test. This is a highly popular test to measure the brittleness of a material. It is run by holding the specimen against a steel backing and striking it with a 30-kg pendulum with a 0.75-meter-long arm. The pendulum’s point of contact is instrumented, with a readout of forces electronically recorded in millisecond detail.
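As a rough sanity check on the numbers quoted above, the pendulum's impact energy can be estimated from its mass and arm length. A minimal sketch, assuming the pendulum is released from the horizontal position (a simplifying assumption made here, not stated in the source):

```python
import math

# Impact energy of a Charpy pendulum, estimated from the values quoted above.
# Assumption: the pendulum is released from the horizontal, so the drop height
# equals the arm length. Real machines specify the release angle and energy rating.

mass = 30.0   # kg, pendulum mass
arm = 0.75    # m, arm length
g = 9.81      # m/s^2

drop_height = arm                               # m, assuming release from horizontal
impact_energy = mass * g * drop_height          # J, potential energy at release
impact_speed = math.sqrt(2 * g * drop_height)   # m/s at the bottom of the swing

print(f"Impact energy: {impact_energy:.0f} J")   # roughly 220 J
print(f"Impact speed:  {impact_speed:.1f} m/s")  # roughly 3.8 m/s
```

A couple of hundred joules is enough to bend a ductile coupon into a "V", yet, as described below, the Titanic steel barely slowed the pendulum down.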

A piece of modern, high-quality steel was tested along with the coupon from the hull steel. Both coupons were placed in a bath of alcohol at -1°C to simulate the conditions on the night of the Titanic disaster. When the coupon of the modern steel was tested, the pendulum swung down and halted with a thud; the test piece had bent into a “V”. However, when the coupon of the Titanic steel was tested, the pendulum struck the coupon with a sharp “ping”, barely slowed, and continued upon its swing; the sample, broken into two pieces, sailed across the room [1].

Results of charpy test after exploration of why did the Titanic sink
Figure 4: Results of the Charpy test for modern steel which was left unbroken (left) and Titanic steel which split into two pieces (right) [1]. Answering the question “Why did the Titanic sink?”

The pictures above show the two coupons following the Charpy test confirming the brittleness of the Titanic’s hull steel. When the Titanic struck the iceberg, the hull plates did not deform, as they should have. Instead, they fractured! This leaves us wondering if the designers anticipated this fracture, and it contributes to the reasons why the Titanic sank.

Did Chemistry Have an Effect?

In the search for answers to the question "Why did the Titanic sink?", the steel from the Titanic was further analyzed for its chemical composition and was found to contain high levels of both oxygen and sulfur, which implied that the steel was semi-killed, low-carbon steel made using the open-hearth process. If one used a powerful microscope to zoom in to dimensions on the order of micrometers, one would see that the steel shows a grain structure, as shown in Figure 5.

High sulfur content disrupts the grain structure of steel, leading to an increase in its brittleness. When sulfur combines with manganese in the steel, it forms stringers of manganese sulfide which act as "highways" for crack propagation. Although most of the steel used for shipbuilding in the early 1900s had a relatively high sulfur content, the Titanic’s steel was particularly high, even for those times [3].

Grain structure of steel used in titanic
Figure 5: Grain boundary structure in steel

While the material is normally quite ductile, a high oxygen content causes it to transition from ductile to brittle behavior. This supported the plausibility of brittle fracture of the hull steel. It is well known that high oxygen content in steel raises the ductile-to-brittle transition temperature, which was determined to be 25°C to 35°C for the Titanic steel. Most modern steels would need to be chilled below -60°C before exhibiting similar behavior.

Further Design Flaws

Material flaws were not the only factors that led to the sinking of the Titanic and hence are not the complete answer to the question “Why did the Titanic sink?” The design of the ship was not nearly good enough to deem it an unsinkable ship. The watertight compartments in the ship’s lower section were not exactly watertight, in any sense. The lower section of the Titanic was divided into sixteen major watertight compartments that could easily be sealed off if part of the hull was punctured and leaking water. These watertight compartments, which made the ship designers claim that the ship was unsinkable, were only watertight horizontally.

Major flaw in the design of Titanic, why did the Titanic sink? image showing how the Titanic sank
Figure 6: RMS Titanic: key design flaw explaining why the Titanic sank so rapidly. It is apparent that as the water filled up one of the compartments, it entered into the other compartment from the top. Not exactly watertight!

The tops of these compartments were open, and the walls extended only a few feet above the waterline [3]. To contain water within the damaged compartments, the transverse walls of the watertight compartments would have needed to be a few feet taller. Although this flaw alone is not the reason why the Titanic sank, correcting it would have slowed the sinking, possibly allowing enough time for nearby ships to help.

The collision with the iceberg damaged the hull at six of these sixteen compartments, and the compartments were immediately sealed. But as water filled them, the ship began to pitch forward under the weight of the water in the bow, and, because the compartments were watertight only horizontally, the water began to spill over into adjacent compartments. The bow compartments were extensively flooded, then the entire ship, causing the Titanic to be rapidly pulled below the waterline.

The watertight compartments, rather than countering the damage done by the collision with the iceberg, contributed towards accelerating the disaster by keeping the flood waters in the bow of the ship. Without the compartments, the Titanic would have remained horizontal as the incoming water would have spread out. Eventually, even in this case, the ship would have sunk, but she would have remained afloat for a few more hours before capsizing [1]. Scientists maintain that this amount of time would have been sufficient for nearby ships to reach the Titanic’s location and all of her passengers and crew could have been saved.

What Could Have Been Done Better?

The Titanic disaster serves as a stark example of how engineering flaws can have catastrophic effects. Analyzing the answers to the question "Why did the Titanic sink?" leads to the conclusion that, had the design of the ship and the materials chosen been better, the disaster could have been averted.

If the ship had had a double bottom, constructed from two layers of steel spanning the length of the ship and separated by about five feet of space, the outer plate of the hull could have been punctured without damage to the inner plate. With a double bottom, the chance that a punctured hull would let water into the watertight compartments is minimized.

By extending the double bottom up the sides of the hull, the watertight compartments could have remained undamaged. An additional layer of steel along the sides of the ship would ensure that, in the event of a collision with an iceberg or another ship, only the space between the inner and outer side walls would flood with water, leaving the inner hull intact. Also, if the transverse bulkheads of the watertight compartments had been raised, the spilling of water over the tops of the bulkheads into adjacent, undamaged compartments could have been avoided as the ship pitched forward under the weight of the water in the bow compartments.

Here it is, an engineer’s analysis of why the Titanic sank. Although it is important to understand the errors of the past, it is crucial to make sure they are not repeated in the future. A proper design process can prevent such catastrophes.


Explore FEA in SimScale

Using Modern Tools of FEA and CFD for Ship Design

Today, with advanced tools at hand, you can use CFD and FEA simulations to virtually test a ship’s design to make sure such catastrophes don’t happen in the future.

CFD analysis of water flow around the keel of a sailing yacht with SimScale
Figure 7: CFD analysis of water flow around the keel of a sailing yacht in SimScale

If you want to explore further the reasons why the Titanic sank, you can use the SimScale cloud-based simulation platform to analyze the stresses on the ship’s hull due to the water, for example. This CAD model of the Titanic is already uploaded to the platform and you can just copy the project and set up a simulation. To discover all the features provided by the SimScale cloud-based simulation platform, download this overview.

To learn how to run simulations with SimScale, you can watch the recording of the first session of the CFD Master Class. Just fill out this short form, and it will play automatically.

Set up your own cloud-native simulation via the web in minutes by creating an account on the SimScale platform. No installation, special hardware, or credit card is required.

References

  • Gannon, Robert (February 1995). "What Really Sank the Titanic". Popular Science, 246(2), pp. 49–55.
  • Woytowich, Richard (April 2012). "Titanic Sinking Tied to Engineering, Structural Failures". Retrieved from https://www.huffingtonpost.in/entry/titanic-sunk-new-theory_n_1412622
  • Hill, Steve. "The Mystery of the Titanic: A Case of Brittle Fracture?". Materials World, 4(6), pp. 334–335.

The post Why Did the Titanic Sink? An Engineer’s Analysis appeared first on SimScale.

]]>
When NASA Lost a Spacecraft Due to a Metric Math Mistake https://www.simscale.com/blog/nasa-mars-climate-orbiter-metric/ Mon, 18 Dec 2017 09:50:06 +0000 https://www.simscale.com/?p=12371 This article explains how NASA lost a spacecraft due to a mistake with metric units and unit conversion. Learn about the Mars...

The post When NASA Lost a Spacecraft Due to a Metric Math Mistake appeared first on SimScale.

]]>
In September of 1999, after almost 10 months of travel to Mars, the Mars Climate Orbiter burned and broke into pieces. On a day when NASA engineers were expecting to celebrate, the ground reality turned out to be completely different, all because someone failed to use the right units, i.e., the metric units! The Scientific American Space Lab made a brief but interesting video on this very topic.

The Metric System and NASA's Mars Climate Orbiter

The Mars Climate Orbiter, built at a cost of $125 million, was a 638-kilogram robotic space probe launched by NASA on December 11, 1998, to study the Martian climate, Martian atmosphere, and surface changes. In addition, it was to act as the communications relay in the Mars Surveyor ’98 program for the Mars Polar Lander. The navigation team at the Jet Propulsion Laboratory (JPL) used the metric system in its calculations, while Lockheed Martin Astronautics in Denver, Colorado, which designed and built the spacecraft, provided crucial thruster data in English units. JPL engineers assumed the data had been converted to metric; in reality, the thruster impulse data were expressed in English units of pound-force seconds (lbf·s) rather than the metric newton-seconds (N·s). In a sense, the spacecraft was lost in translation.

artist's conception of the Mars Climate Orbiter NASA spacecraft
Figure 1: Artist’s conception of the Mars Climate Orbiter. Source: NASA/JPL/Corby Waste – Wikimedia Commons

Before venturing further into what happened on that dreaded day, let’s try to understand the different units of measurement and how they came into use in various regions across the globe. In the past, various regions of the world followed the measurement systems and units that were most convenient for them. For example, in one part of the world, the cycle of the sun was assumed to be a measure of time, whereas elsewhere, it was the lunar cycles that were used to define time. Additionally, the lack of communication tools prevented scholars from communicating, discussing, and comparing ideas with scholars across the globe. Thus, over the course of centuries, different units and measuring standards have evolved independently.

As the world has grown closer, the need for a single unified system of units has emerged. The development of the metric system can be traced back to the French Revolution, when it was first envisioned. Subsequently, two platinum standards representing the meter and the kilogram were deposited in the Archives de la République in Paris. This can be considered the first step in the development of the present International System of Units.

Following the French Revolution, Johann Carl Friedrich Gauss, a German mathematician, strongly promoted the use of this metric system. Alongside the meter and the kilogram, he added the second, as defined in astronomy, to form a coherent system of units for the physical sciences. James Clerk Maxwell and Sir Joseph John Thomson, through the British Association for the Advancement of Science (BAAS), carried Gauss' initiative forward and formulated the requirement for a coherent system of units with base units and derived units. The result was the CGS system, a three-dimensional coherent unit system based on the centimeter, the gram, and the second, using prefixes ranging from micro to mega to express decimal sub-multiples and multiples. In 1889, the first General Conference on Weights and Measures (CGPM) sanctioned the international prototypes for the meter and the kilogram. Together with the astronomical second as the unit of time, these units constituted a three-dimensional mechanical unit system, just like the CGS system, but with the meter, kilogram, and second as base units.

It was Giovanni Giorgi, an Italian physicist and electrical engineer, who proved that it is possible to combine the mechanical units of this meter–kilogram–second system with the practical electric units to form a single coherent four-dimensional system, by adding to the three base units a fourth base unit of an electrical nature, such as the ampere or the ohm, and rewriting the equations of electromagnetism in the so-called rationalized form. Following these developments, in 1939, the four-dimensional system based on the meter, kilogram, second, and ampere was recommended to the Consultative Committee for Electricity and Magnetism (CCEM) and was approved by the International Committee for Weights and Measures (abbreviated CIPM, from the French Comité international des poids et mesures) in 1946. The ampere, kelvin, and candela were formally adopted as base units in 1954, and the mole was added as the seventh base unit in 1971. Today, there are seven base units: the meter (length), kilogram (mass), second (time), ampere (electric current), kelvin (temperature), mole (amount of substance), and candela (luminous intensity).

SI in the United States

If one travels to the US, one will notice these differences immediately: there are miles instead of kilometers, pounds instead of kilograms, and so on. For almost 22 years of my life, I had used kilograms, and when I went to live in the US, the "pound" was totally new to me. While I could predict how much I would get if I bought a kilogram of an item, I had no sense of what one pound meant. The US remains one of only a handful of countries that have not fully adopted SI units.

The American system of measuring distance in inches, feet, and yards is based on units from England, brought over by the first settlers who came to the US on the Mayflower. While much of the rest of the world uses the metric system of centimeters, meters, and kilometers, the US has continued to use the English units. One foot is 12 inches, a yard is 36 inches, and the confusion continues. In metric, 1 meter is 100 centimeters, and a kilometer is 1000 meters. However, it is undeniable that a large number of multinationals and international businesses work with and/or in the United States today, which makes it even more important to use common units of measurement.

Comprehending the overwhelming advantages of the metric system, the US Congress adopted SI units as the preferred measurement system in 1975 through the “Metric Conversion Act” which was signed by US President Gerald Ford. However, the act also allowed the use of US customary units. Further on, in the 1980s, the federal government tried to introduce metric in the United States. Speedometers on the cars from that time showed both miles per hour and kilometers per hour. However, these attempts at changing to metric were not successful.

Even though the US Congress adopted SI as the preferred measurement system for the United States, the vast majority of businesses continued to use US customary units. This reservation against metric changed almost overnight, at least at NASA, in 1999, after a disaster investigation board reported that the Mars Climate Orbiter had burned up in the Martian atmosphere.

NASA's Mars Climate Orbiter Disaster

mars climate orbiter launch
Figure 2: A Boeing Delta II 7425 expendable launch vehicle lifts off with NASA’s Mars Climate Orbiter on Dec. 11, 1998

A NASA review board found that the problem was in the software controlling the orbiter’s thrusters. The software calculated the impulse that the thrusters needed to exert in pound-force seconds. A second piece of code that read this data assumed it was in the metric unit, newton-seconds.

During the design phase, the propulsion engineers at Lockheed Martin in Colorado expressed force in pounds. However, it was standard practice to convert to metric units for space missions. Engineers at NASA’s Jet Propulsion Lab assumed the conversion had been made. This navigation mishap pushed the spacecraft dangerously close to the planet’s atmosphere where it presumably burned and broke into pieces, killing the mission on a day when engineers had expected to celebrate the craft’s entry into Mars’ orbit.
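The mismatch is easy to reproduce, and just as easy to guard against, with an explicit conversion. A minimal sketch in Python; the impulse value is purely illustrative and not actual mission data:

```python
# 1 lbf = 0.45359237 kg * 9.80665 m/s^2 = 4.4482216152605 N (exact by definition)
LBF_S_TO_N_S = 4.4482216152605

def thruster_impulse_si(impulse_lbf_s: float) -> float:
    """Convert a thruster impulse from pound-force seconds to newton-seconds."""
    return impulse_lbf_s * LBF_S_TO_N_S

# Illustrative value only, not actual Mars Climate Orbiter telemetry:
reported = 10.0      # lbf*s, as delivered by the contractor's software
assumed = reported   # N*s, as (wrongly) interpreted by the downstream code
correct = thruster_impulse_si(reported)

print(f"Interpreted as metric: {assumed:.1f} N*s")
print(f"Actual impulse:        {correct:.1f} N*s "
      f"({correct/assumed:.2f}x larger than assumed)")
```

The factor of roughly 4.45 between the two units is exactly the kind of systematic error that accumulated over months of small trajectory corrections.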

The contributing factors that led to the disaster, as reported by the Mars Climate Orbiter failure board, were eight-fold. According to NASA's board, errors went undetected in the ground-based computer models of how small thruster firings were predicted and then carried out during the spacecraft's interplanetary trip to Mars. Furthermore, the board added that the operational navigation team was not fully informed of the details of how the Mars Climate Orbiter was pointed in space, as compared to the earlier Mars Global Surveyor mission.

The initial error was made by contractor Lockheed Martin Astronautics in Colorado, which, like the rest of the U.S. launch industry, used English measurements. The contractor, by agreement, was supposed to convert its measurements to metric. The systems engineering function within the project, whose responsibility it was to track and double-check all interconnected aspects of the mission, was not robust enough. The board added that this was exacerbated by the first-time handover of a Mars-bound spacecraft from the group that constructed and launched it to a new, multi-mission operations team.

Mars Climate Orbiter Cartoon
Figure 3: Newspaper cartoon depicting the incongruence in the units used by NASA and Lockheed Martin scientists that led to the Mars Climate Orbiter disaster. (Source: Slideplayer.com)

Other Instances of Conversion Errors

Gimli Glider

This was not the only disaster in history directly caused by conversion errors. 1983 is famous for the "Gimli Glider" incident, in which an Air Canada Boeing 767 ran out of fuel mid-flight because of a mistake in calculating the fuel load of the airline's first aircraft to use metric measurements.

Canada was one of the countries that employed the imperial system until 1970 when the nation began to change over to metric. Metrication (as it was called) took some time—about fifteen years or more. One of the industries that were late to change over was the airline industry, which was mainly due to the expense and longevity of the equipment.

The pre-flight fueling protocol required converting volume (liters) into mass (kilograms or pounds, depending on the system in use) to estimate the amount of fuel required. Instead of figuring out how many liters the plane needed to reach the required fuel load of 22,300 kg, the crew calculated how many liters were needed to reach 22,300 pounds. This was about half the quantity of fuel required, which meant that the flight only had enough fuel to make it roughly halfway to its destination. This turned out to be a major, and potentially life-threatening, problem, as the vehicle in question was an airplane cruising at 12,500 meters above the ground.
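The arithmetic behind the shortfall is straightforward. A minimal sketch, using an assumed typical jet-fuel density rather than the figure from the incident report:

```python
# Gimli Glider fuel arithmetic: loading "22,300 pounds" when 22,300 kg were needed.
KG_PER_LB = 0.45359237
FUEL_DENSITY = 0.80   # kg per litre, assumed typical value for jet fuel

required_mass_kg = 22300.0             # kg, fuel load actually needed
loaded_mass_kg = 22300.0 * KG_PER_LB   # kg on board if "22,300 lb" is loaded instead

required_volume_l = required_mass_kg / FUEL_DENSITY
loaded_volume_l = loaded_mass_kg / FUEL_DENSITY

print(f"Required: {required_mass_kg:8.0f} kg  (~{required_volume_l:8.0f} L)")
print(f"Loaded:   {loaded_mass_kg:8.0f} kg  (~{loaded_volume_l:8.0f} L)")
print(f"Fraction of required fuel on board: {loaded_mass_kg/required_mass_kg:.0%}")
```

With the assumed density, the aircraft ends up with roughly 45% of the fuel it needed, which matches the "about half" figure in the story above.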

Luckily for all on board, the pilot had ten years of glider training under his belt, and his co-pilot knew the surroundings quite well. The skilled pair was able to land the 767—gliding the last 100 kilometers, ensuring the safety of everyone on board.

Lesser Known Incidents

There have been several lesser-known occurrences of conversion mishaps. The Institute for Safe Medication Practices reported an instance where a patient had received 0.5 grams of Phenobarbital (a sedative) instead of 0.5 grains because the recommendation was misread. A grain is a unit of measure equal to about 0.065 grams. The Institute emphasized that only the metric system should be used for prescribing drugs.

In yet another event, an aircraft was more than 13,000 kilograms overweight. In 1994, the FAA received an anonymous tip that an American International Airways (now Kalitta Air, a cargo airline) flight had landed 15 tons heavier than it should have. The FAA investigated and discovered that the problem was in a kilogram-to-pounds conversion (or lack thereof).

Finally, it is worth mentioning that even Columbus had conversion problems. He miscalculated the circumference of the earth when he used Roman miles instead of nautical miles, which is part of the reason he unexpectedly ended up in the Bahamas on October 12, 1492, and assumed he had hit Asia.

Units in FEM

As you may have noticed, there are no predefined units in most FEM software. It is left to the user to ensure that a consistent set of units is used. If the unit for length is the meter, then the consistent choices for mass and time are the kilogram and the second, giving forces in newtons and stresses in pascals. If, instead, millimeters are used for length, a common consistent choice is grams for mass and milliseconds for time, which yields forces in newtons and stresses in megapascals, and so on. Every time you set up a simulation, give the units a thought!

consistent units to be used in FEM, metric
Table 1: Consistent units to be used in FEM simulations (Source: Eng-Tips)
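The logic behind such tables can be checked programmatically: pick the scale factors of your base units relative to SI, and the consistent force and stress units follow from F = m·a and stress = force/area. A generic sketch, not tied to any particular solver:

```python
# Derive the force and stress units implied by a choice of base units,
# expressed as scale factors relative to SI (m, kg, s).
def derived_units(length_m: float, mass_kg: float, time_s: float):
    force_n = mass_kg * length_m / time_s**2   # size of the force unit in newtons
    stress_pa = force_n / length_m**2          # size of the stress unit in pascals
    return force_n, stress_pa

for name, base in {
    "m-kg-s":     (1.0,  1.0,  1.0),   # SI: force in N, stress in Pa
    "mm-g-ms":    (1e-3, 1e-3, 1e-3),  # force in N, stress in MPa
    "mm-tonne-s": (1e-3, 1e3,  1.0),   # force in N, stress in MPa
}.items():
    force, stress = derived_units(*base)
    print(f"{name:12s} force unit = {force:g} N, stress unit = {stress:g} Pa")
```

Any set of base units works, as long as every input and output of the model is interpreted in the units the table implies.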

Explore FEA in SimScale

When using an FEA or CFD simulation software, be aware that not all the solutions on the market offer both the metric and the imperial units. To help our users avoid making design mistakes, SimScale supports both the metric and the imperial systems, which you choose in the first step of creating your simulation. All you have to do is make sure your collaborators use the same system.

Set up your own cloud-native simulation via the web in minutes by creating an account on the SimScale platform. No installation, special hardware, or credit card is required.


The post When NASA Lost a Spacecraft Due to a Metric Math Mistake appeared first on SimScale.

]]>
Why The Swedish Vasa Ship Sank https://www.simscale.com/blog/vasa-ship-sank/ Thu, 07 Dec 2017 14:15:15 +0000 https://www.simscale.com/?p=12379 This article explains what went wrong in the planning and construction of the Vasa ship and how the lack of scientific methods...

The post Why The Swedish Vasa Ship Sank appeared first on SimScale.

]]>
It was 4 p.m. on August 10th, 1628, and the Vasa ship had barely left the docks of Stockholm harbor on its maiden voyage. Only 1300 meters into its voyage, a light gust of wind toppled the ship over on its side. As water flooded through the gun portals of the ship, it sank in the shallow waters of Stockholm harbor and lay there at 32 meters deep, forgotten. In 1956, the Vasa ship was found by Anders Franzen, a Swedish marine technician and amateur naval archaeologist. It was salvaged between 1959-61 and can be found today in the museum that was specially built for it.

What happened on that fateful day of August 10th? Was it expected? Why did the Vasa ship sink? Let’s dive deeper into the heart of the mystery. Several videos analyzing the Vasa ship can be found on YouTube. Here is a particularly brief yet interesting one:

As we go ahead, we will need to understand the terminologies related to shipbuilding. Hence, defining some terms visually could be the best solution.

numbers point towards areas where Ship terminology is used to describe the Vasa Ship
Figure 1: Terminology related to the different parts of a ship (Source: Keel of a ship)
  1. Mainsail
  2. Staysail
  3. Spinnaker
  4. Hull
  5. Keel
  6. Rudder
  7. Skeg
  8. Spar
  9. Spreader
  10. Shroud (sailing)
  11. Sheet
  12. Boom
  13. Mast
  14. Spinnaker pole
  15. Backstay
  16. Forestay
  17. Boom vang

The Building of the Vasa Warship

We can never get the full story without first understanding the events that unfolded relating to the building of the ship and the historical time in which it was built. It was on the 16th of January, 1625, that King Gustav II Adolph of Sweden directed Admiral Fleming to sign a contract with the shipbuilders of Stockholm (Hendrik and Arend Hybertsson) to build four ships. Two smaller ships (108-ft keel length) and two bigger ships (135-ft keel length) were to be built over the course of four years. In the months that followed, King Gustav changed his orders several times, leading to total chaos and confusion for the builders.

The Swedish Navy lost ten ships on the 10th of September 1625, which led the king to order that the two smaller ships be built on an accelerated schedule to compensate for those they had lost. One can’t blame him, as those were the days when various parts of the world were colonized, and naval forces meant strength. These were meant to be two small ships (111-ft and 135-ft ships). Please note that the dimensions stated relate to the keel length unless explicitly specified otherwise.

On one front, the construction of the 111-ft ship with a single gun deck began. Simultaneously, on the other, the king received news that the Danish were building a ship with two gun decks. How did the king react to this one-upmanship? If you know anything about the egos of kings and dictators, you will have guessed it right. He immediately ordered Admiral Fleming to build a 135-ft ship with two gun decks. Until then, no one in Sweden had ever built a ship with two decks. This development from one- to two-deck warships marked a significant change in Naval architecture between 1600–1700.

While the original Vasa followed a traditional architecture commonly used in shipbuilding, it is believed that no specifications, drawings, or plans were made. Hybertsson was an experienced shipbuilder and would likely have taken it on as another standard, traditional job. However, once the circumstances changed, no modified plans were produced for the larger, more complex version of the Vasa. Under time pressure, it is believed that Henrik Hybertsson simply “scaled up” the dimensions of the original 108-ft ship to meet the length and breadth of the new 135-ft Vasa. Since this was unplanned, the upper parts of the ship ended up wider than originally intended by one foot and five inches. The Vasa was thus becoming much wider at the top than at the bottom, and the ship’s center of gravity was much higher than designed.

buoyant force illustrated on vasa ship for stability chart
Figure 2: Ship stability – A delicate balance between gravity and buoyancy. (Source: Wista Tutor)
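For small heel angles, a ship's tendency to return upright can be summarized by the righting moment M ≈ Δ · GM · sin(θ), where Δ is the ship's weight and GM the metacentric height, i.e., the distance from the center of gravity to the metacenter. Raising the center of gravity reduces GM and can even make it negative, at which point any heel grows instead of being corrected. A minimal sketch with purely illustrative numbers; none of them are the Vasa's actual hydrostatic data:

```python
import math

# Small-angle ship stability: righting moment M = displacement * GM * sin(theta),
# with GM = KM - KG (metacenter above keel minus center of gravity above keel).
# All values below are illustrative, not the Vasa's actual hydrostatics.

displacement = 1200e3 * 9.81   # N, weight of an assumed ~1200-tonne ship
KM = 5.0                       # m, metacenter above the keel (assumed)
heel = math.radians(10)        # a modest 10-degree heel

for KG in (4.0, 4.8, 5.4):     # m, center of gravity above the keel
    GM = KM - KG
    righting_moment = displacement * GM * math.sin(heel)
    status = "stable" if GM > 0 else "UNSTABLE (capsizes)"
    print(f"KG = {KG:.1f} m -> GM = {GM:+.1f} m, "
          f"righting moment = {righting_moment/1e3:+.0f} kN*m  [{status}]")
```

The sketch shows the qualitative effect described in this article: every bit of extra height and weight added above the waterline eats into GM, until even a light gust can push the ship past the point of no return.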

At this point, you must already be sensing the impending disaster and wondering how everyone else failed to notice. Time can be a funny thing! The Vasa ship was a disaster waiting to happen. Additionally, it did not help that the primary designer, Henrik Hybertsson, fell ill and died in 1627, almost one year before the Vasa was completed.

The plans were undocumented, there were no detailed specifications, and five different teams were working on the hull without any intercommunication! This was one of the biggest projects in Sweden at that time, and it was a total disaster just waiting to happen.

Was the Vasa Ship Tested?

While the Vasa ship already seemed unstable at the harbor, at this point in history, there were no known methods to measure the stability, center of gravity, or heeling (or toppling) characteristics. Most captains simply used trial and error to understand the best operational characteristics.

Admiral Fleming and Captain Hansson (the Vasa’s captain) ran a test with 30 men running from side to side (a “lurch” test). After three rounds, the test was stopped because the ship rocked so violently that it was feared she would heel over. However, no one had any idea how to stabilize the ship. Additional weight below the floorboards would have been one option, but there was barely any floor space left. Moreover, the Vasa already carried about 120 tons of ballast; nearly twice that would have been needed to stabilize her, and adding it would have brought the lower gun deck portals down to the waterline.

vasa ship weight and stability chart
Figure 3: How adding weights can change the center of gravity of a ship (Source: Ship Inspection)

Even knowing that the ship had stability issues, it was given the go-ahead, which was the result of three factors:

  1. Pressure from King Gustav,
  2. The king of Poland had started a war campaign, and
  3. No one knew what else to do.

The Dreadful Day for Vasa

The Vasa ship was one of the costliest projects of the time, with hundreds of ornate, gilded, and painted carvings depicting biblical, mythical, and historical themes. It was meant to be the most impressive ship, and no cost was spared. However, all these extra features only further raised the ship’s center of gravity!

On the 10th of August 1628, the Vasa left Stockholm harbor. She had barely sailed 1,300 meters when she was met by a wind gust of about eight knots. The wind was so light that the sails had been set by hand, and just one person was enough to hold the sheets. Even with such a light wind, the ship heeled over on her side, water poured in through the gun portals, and she sank in the harbor, taking 53 lives with her. The captain of the ship survived the incident and was immediately jailed for incompetence. However, a formal hearing was conducted in September of the same year; the captain and crew were set free, and the charges of incompetence were dropped. No exact cause was determined.

Restoration and Lessons from Vasa

The ship had sunk in the shallow, brackish waters of the Baltic Sea, and thanks to the low salinity of the water, which keeps out the shipworm, the wooden vessel survived infestation and degradation. Exactly 333 years later, it was raised to the surface of the harbor. After the water and mud were pumped out and the gun portals sealed, the Vasa floated again. Today, it stands in its own museum, the Vasa Museum.

Real life Vasa ship at the museum in Stockholm
Figure 4: The Vasa ship on display in Sweden (Source: By JavierKohen [CC BY-SA 3.0], from Wikimedia Commons)

The factors that contributed to the failure of the Vasa ship were plentiful but can be summarized in 4 main reasons:

1. Unreasonable time pressure
2. Changing specifications and lack of documentation or project plan
3. Over-engineering and innovation
4. Lack of scientific methods and reasoning

As evident, one of the primary scientific reasons can be attributed to a lack of stability due to the ship’s raised center of gravity. Without the existence of proper tools for the design and testing phase, building Europe’s mightiest warship and preventing foundering was even harder.

Explore FEA in SimScale

Three centuries later, we have access to all of these tools, but design mistakes are still being made. Engineering simulation software is not always integrated into the design process, mostly due to high costs and lack of specialized training. With the SimScale cloud-based platform, however, engineers working in shipbuilding and boat design can use CFD and FEA simulation via a standard web browser and from any computer. As for the training, there are hundreds of free learning materials available online. An on-demand webinar about multiphase flow is a good start. Just fill out this form and watch it for free.

Set up your own cloud-native simulation via the web in minutes by creating an account on the SimScale platform. No installation, special hardware, or credit card is required.

The post Why The Swedish Vasa Ship Sank appeared first on SimScale.

]]>
How to Calculate Stress and Strain with FEM Software? https://www.simscale.com/blog/stress-and-strain/ Mon, 24 Apr 2017 13:11:11 +0000 https://www.simscale.com/?p=7095 FEM software users often wonder how stress and strain are calculated, and how these quantities are related to the nodes....

The post How to Calculate Stress and Strain with FEM Software? appeared first on SimScale.

]]>
FEM software users often wonder how stress and strain are calculated, and how these quantities are related to the nodes. Which of these is calculated first? An ad-hoc answer to the question is strain. However, the real answer is neither. Confused? In short, FEM software calculates the displacements and reaction forces at the nodes. These are later used to calculate the strains and then the stresses. If you’re wondering what exactly the finite element method is and how to learn it, I recommend that you read these two blog articles: “What is FEA?” and “How can I learn FEA?“

In order to understand how and when stress and strain are calculated, it is important to first understand how FEA software—including SimScale—functions. Most FEM software solutions have a similar workflow, which is imperative for knowing how and when quantities are calculated.

Mathematics of the Solution Process in FEA

Originally, computers were designed to solve matrix equations. Numerical techniques have been developed over the last several decades with the goal of solving linear and nonlinear matrix equations. Today, one can say that linear systems can definitely be solved. However, the same cannot be said for nonlinear systems of equations.

In addition, one aspect of efficient meshing is the assignment of node numbers such that the total bandwidth of the matrix is reduced to a minimum. Equation systems are therefore commonly stored in “sparse” formats. For more details on sparse matrices and solvers, the blog article on “Direct and iterative solvers” is recommended reading. An entire branch of applied mathematics is dedicated to improving solvers and algorithms for solving nonlinear systems of equations.
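To illustrate what "sparse storage" means in practice, here is a minimal sketch using SciPy (not SimScale's internal solver): a small, stiffness-like tridiagonal matrix is stored in compressed sparse row (CSR) format and solved with a direct sparse solver.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# A small tridiagonal "stiffness-like" matrix: only the non-zero entries are stored.
n = 6
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = csr_matrix(np.diag(main) + np.diag(off, 1) + np.diag(off, -1))

f = np.zeros(n)
f[-1] = 1.0          # unit load on the last degree of freedom

u = spsolve(K, f)    # direct sparse solve of K u = f
print("Stored non-zeros:", K.nnz, "of", n * n, "entries")
print("Solution u:", u)
```

For a real model with millions of degrees of freedom, the fraction of stored entries becomes tiny, which is precisely why bandwidth reduction and sparse formats matter.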


The purpose of a helmet is to protect the person who wears it from a head injury during impact. In this project, the impact of a human skull with and without a helmet was simulated with a nonlinear dynamic analysis. Download this case study for free.


The Process of FEA

So far, it is understood that FEM converts a PDE into a system of equations. This system of equations (in matrix form) is solved using sophisticated algorithms and solvers. The question remains: what is one solving for? Displacements, velocities, strains, stresses, or forces?

In short, we are generally solving for unknowns at the nodes. In mechanical problems, these unknowns are commonly displacements, and in thermal problems, the unknown is usually temperature. Further quantities like stress and strain are obtained only as part of post-processing. The overall process of FEA can be divided into several steps:

  • Preparation of geometry
  • Pre-processing
  • Solution (analysis)
  • Post-processing for outputs

In the coming section, each step involved in an FEA program and when/how stress and strain are calculated will be discussed in more detail. The overall procedure remains the same, irrespective of the software used.

Preparation of Geometry

The preparation of geometry refers to the development of a CAD model. Often, CAD models can be obtained through online platforms like GrabCAD. The CAD model needs to be a good representation of reality. It is important to be cautious, as many of the models obtained from such forums have engravings. The original authors often engrave their name on the CAD model, and the presence of such engravings can cause issues with automatic mesh generators, leading to extremely small or distorted elements and, subsequently, errors in the solution.

Pre-Processing

Pre-processing involves several important issues, which start from meshing. SimScale offers automatic mesh generators, including the possibility for mesh refinements in layers or at regions around possible singularities.

The mesh creation is an integral part of the preparation for the solution process. Meshing also implies choosing an interpolation function. For example, as discussed in the blog article “Modeling Elastomers Using FEM: Do’s and Don’ts,” second-order interpolation functions are preferred over first-order ones, especially for modeling hyperelastic materials. In contrast, using first-order tetrahedrons can be disastrous! Hence, choosing the right mesh and interpolation type governs the accuracy.

In addition, pre-processing involves choosing the appropriate boundary conditions, material properties, and constraints for the selected mesh.

Solution Process

The PDE governing the physics is converted to an integral form and then into a matrix form. The matrix contribution of each element is calculated and assembled into a global system of equations. This global system is eventually sent to a solver, which solves for the nodal displacements and reactions. In linear problems, the solver obtains the displacements and reactions in one step, while nonlinear problems need several iterations to solve for the same.
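The difference between the one-step linear solve and the iterative nonlinear solve can be seen on a single degree of freedom. Below is a minimal Newton-Raphson sketch for a spring whose internal force grows nonlinearly with displacement; the material law and numbers are made up purely for illustration:

```python
# One degree of freedom with a nonlinear "spring": f_int(u) = k*u + b*u**3.
# A linear problem (b = 0) is solved in one step; the nonlinear one needs
# several Newton-Raphson iterations on the residual f_ext - f_int(u).

k, b = 1000.0, 5.0e4   # illustrative stiffness parameters
f_ext = 250.0          # applied load

def f_int(u):          # internal force
    return k * u + b * u**3

def tangent(u):        # derivative of the internal force (tangent stiffness)
    return k + 3.0 * b * u**2

u = 0.0
for it in range(1, 21):
    residual = f_ext - f_int(u)
    if abs(residual) < 1e-8 * f_ext:
        print(f"Converged after {it - 1} iterations: u = {u:.6f}")
        break
    u += residual / tangent(u)   # Newton update: K_T * du = residual
else:
    print("Did not converge")
```

With b set to zero, the very first update already satisfies equilibrium, which is the one-step linear solve mentioned above.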

Once the nodal displacements have been solved for, it is time to calculate the stresses that are shown in the plots. A flag is generally used to inform the routine that the solution process is complete and that stresses are now being calculated for plotting.

Stress Projection Process

The stresses are calculated at each of the integration points in the element and later projected to the nodal points, as illustrated in Fig. 03.

Stress and strain, Illustration of stress projection from Gauss points to the nodal points. A FEM mesh (left) and its blown up image showing the projection of stress (right)
Fig. 03: Illustration of stress projection from Gauss points to the nodal points. A FEM mesh (left) and its blown-up image showing the projection of stress (right)

Fig. 03 illustrates a simple case of a four-noded quadrilateral element with 2×2 quadrature scheme. Irrespective of the type of element and integration scheme used, the overall procedure remains the same and the displacements are known at the nodal points. Using the interpolation (or ansatz or shape) functions, the displacements are first calculated at the integration points.

Following this, the kinematic quantities, like strains and deformation gradients, are calculated at the Gauss point. Then, using these kinematical quantities and material properties, the stress is calculated at the Gauss point. Following this, using the interpolation function for that element, this stress is projected to the nodes.

Now, as shown in Fig. 03, each projection might lead to slightly (or even completely) different values of stresses at the node. Most FEA programs perform a nodal averaging so that the projected values are averaged and only one value is considered for each node. This is shown in red in Fig. 03.

Caution is needed here: due to the above averaging, stresses might appear smooth even when there are significant jumps between elements and the mesh is not fine enough to obtain stress convergence. This is one reason why, especially in fracture problems, displacement plots are often discussed rather than stresses. While averaging is a nice tool for colorful pictures, switching off averaging can reveal whether the mesh actually leads to reasonable results.
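The projection and averaging steps can be made concrete with a small 1D sketch: linear two-node bar elements with one Gauss point each, strain and stress computed per element, and a simple nodal average afterwards. This is only an illustration of the idea, not how any particular FEM code implements it:

```python
import numpy as np

# 1D bar with linear two-node elements: strain/stress live at the Gauss points,
# nodal values are obtained afterwards by projection and averaging.
E = 210e9                                      # Pa, Young's modulus of steel
x = np.array([0.0, 0.5, 1.0, 1.5])             # nodal coordinates (3 elements)
u = np.array([0.0, 1.0e-4, 1.5e-4, 4.0e-4])    # nodal displacements (assumed solved)

elements = [(0, 1), (1, 2), (2, 3)]
gauss_stress = []
for (i, j) in elements:
    L = x[j] - x[i]
    strain = (u[j] - u[i]) / L                 # constant strain in a linear element
    gauss_stress.append(E * strain)            # stress at the element's Gauss point

# Project to nodes: each element contributes its (constant) stress to its two nodes,
# then a simple nodal average is taken -- this is the smoothing discussed above.
nodal_sum = np.zeros(len(x))
nodal_count = np.zeros(len(x))
for (i, j), s in zip(elements, gauss_stress):
    for n in (i, j):
        nodal_sum[n] += s
        nodal_count[n] += 1
nodal_stress = nodal_sum / nodal_count

print("Element (Gauss point) stresses [MPa]:", np.round(np.array(gauss_stress) / 1e6, 1))
print("Averaged nodal stresses        [MPa]:", np.round(nodal_stress / 1e6, 1))
```

Comparing the two printed lines shows how averaging hides the jump in stress between neighboring elements, which is exactly the effect to watch out for when judging mesh quality.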

Conclusion: Calculating Stress and Strain with FEM Software

cardiovascular stent stress analysis with the SimScale FEM software
Cardiovascular stent stress analysis with the SimScale FEM software

To summarize the discussions, displacements and reaction forces are the fundamental quantities that are solved for in any FEM computation. Both stresses and strains are calculated as post-processing quantities once a converged solution is obtained for the nodal displacements.

Stress and strain are calculated at the Gauss points and further on projected to the nodal points. In this scenario, it is vital that extreme caution is exercised since most FEM software solutions perform a nodal averaging. Such averaging will disguise any mesh effects that need to be addressed for accurate solutions.

I hope this article helped you better understand stress and strain. If you’d like to put your knowledge into practice, SimScale offers the possibility to carry out finite element analyses in the web browser. To learn about all the features provided by the SimScale cloud-based simulation platform, download this document.

The post How to Calculate Stress and Strain with FEM Software? appeared first on SimScale.

]]>