First of all, please take this text as a written chat between you and me, i.e. a practising engineer who has already taken the journey from college to performing actual engineering work using finite element analysis and has something to say about it. Picture yourself in a coffee bar, talking and discussing concepts and ideas with me. Maybe needing to go to a blackboard (or notepad?). Even using a tablet to illustrate some three-dimensional results. But always as a chat between colleagues.
Please also note that I am not a mechanical engineer, although I took many undergraduate courses in this discipline. I am a nuclear engineer with a strong background in mathematics and computer programming. I went to college between 2002 and 2008. Probably a lot of things have changed since then---at least that is what these “millennial” guys and girls seem to be boasting about---but chances are we all studied solid mechanics and heat transfer with a teacher using a piece of chalk on a blackboard while we as students wrote down notes with pencils on sheets of paper. And there is really not much that one can do with pencil and paper regarding mechanical analysis. Any actual case worth the time of an engineer needs to be more complex than an ideal canonical case with a closed-form solution.
::::: {#fig:pendulum}
{#fig:simple width=35%}\
{#fig:hamaca width=60%}
:::::
::::: {#fig:pipes}
{#fig:infinite-pipe width=40%}
{#fig:isometric width=58%}
Left: what we are taught in college. Right: a real-life isometric drawing. :::::
Like the pendulums above, we will be swinging back and forth between a case study about fatigue assessment of piping systems in a nuclear power plant and more generic topics related to finite elements and computational mechanics. These latter digressions will not remain just abstract theoretical ideas. Not only will they be directly applicable to the development of the main case, but they will also apply to a great deal of other engineering problems tackled with the finite element method (and its cousins, [@sec:formulations]).
\medskip
Finite elements are like magic to me. I mean, I can follow the whole derivation of the equations, from the strong, weak and variational formulations of the equilibrium equations for the mechanical problem (or the energy conservation for heat transfer) down to the algebraic multigrid preconditioner for the inversion of the stiffness matrix, passing through Sobolev spaces and grid generation. Then I can sit down and program all these steps into a computer, including the shape functions and their derivatives, the assembly of the discretised stiffness matrix ([@sec:building]), the numerical solution of the system of equations ([@sec:solving]) and the computation of the gradient of the solution ([@sec:stress-computation]). Yet, the fact that all these a-priori unconnected steps give rise to pretty pictures that resemble reality is still astonishing to me.
After finishing college, we feel we can solve and fix the world (if you have not finished yet, you will feel it shortly). But the thing is that we cannot (yet). Once again, take all this information as coming from a fellow that has already taken such a journey from college’s pencil and paper to real engineering cases involving complex numerical calculations. And who developed, in the meantime, both an actual working finite-element solver and a web-based pre- and post-processor from scratch.
There are some useful hints that come in handy when trying to solve a mechanical problem. Throughout this text, I will try to tell you some of them.
One of the most important ones is to use your imagination. You will need a lot of imagination to “see” what is actually going on when analysing an engineering problem. This skill comes from my background in nuclear engineering, where I had no choice but to imagine a positron-electron annihilation or a spontaneous fission of a uranium nucleus. But in mechanical engineering it is likewise important to be able to imagine how the loads “press” one element against another, how the material reacts depending on its properties, how the nodal displacements generate stresses (both normal and shear), how results converge as the mesh gets denser, etc. And what these results actually mean besides the pretty-coloured figures.
This journey will definitely need your imagination. We will peek a little bit into equations, numbers, plots, schematics, CAD geometries, 3D\ views, etc. Still, when the theory says “thermal expansion produces normal stresses” you have to picture in your head three little arrows pulling away from the same point in three directions, or whatever mental picture you have of what you understand thermally-induced stresses to be. Whatever it is, try to practice that kind of graphical thinking with every new concept. Nevertheless, there will be particular places in the text where imagination will be most useful. I will bring the subject up now and again throughout the text.
Another point to observe is that we will be digging into some mathematics. It will probably be simple and you will deal with it very easily. But chances are you do not like equations. No problem! Just ignore them for now. Read the text skipping them, it should work as well. Very much like learning to drive does not involve a lecture on thermodynamics, it is true that solving problems with finite elements does not require learning complex mathematics. But both thermodynamics and mathematics are “nice-to-have,” in the same sense that people who know harmony theory enjoy good music much more than those who do not. So my experience tip is this one: even though you do not strictly need it, keep exercising mathematics. You used differences of squares in high school, didn’t you? You know (or at least knew) how to integrate by parts. Do you remember what Laplace transforms are used for? Once in a while, perform a division of polynomials using Ruffini’s rule. Or compute the second derivative of the quotient of two functions. Whatever. It should be like doing crosswords in the newspaper. Grab those old physics college books and solve the exercises at the end of each chapter. All the effort will, trust me, pay off later on.
One final comment: throughout the text I will be referring to “your favourite FEM program.” I bet you do have one. Mine is CAEplex (it works on top of Fino, which is free and open source). We will be using your favourite program in this case study to perform some tests and play a little bit. And we will also use it to think about what it means to use a FEM program to generate results that will eventually end up in a written project with your signature. Keep that in mind.
Piping systems in sensitive industries like nuclear or oil & gas should be designed and analysed following the recommendations of an appropriate set of codes and norms, such as the ASME\ Boiler and Pressure Vessel Code. This code of practice was born in the early 20th century, before finite-element methods for solving partial differential equations were even developed. And long before they were available to the general engineering community. Therefore, much of the code assumes design and verification is not necessarily performed numerically but with paper and pencil (yes, like in college). It provides guidance in order to ensure pressurised systems behave safely and properly without necessarily needing to resort to computational tools. Yet combining finite-element analysis with the ASME code gives the cognisant engineer a unique combination of tools to tackle the problem of designing and/or verifying pressurised piping systems.
In the years following Enrico Fermi’s demonstration that a self-sustaining fission chain reaction was possible (in fact, after WWII was over), people started to build plants in order to transform the energy stored within the atom’s nucleus into usable electrical power. They quickly reached the conclusion that high-pressure heat exchangers and turbines were needed. So they started to follow codes of practice like the aforementioned ASME\ B&PVC. They also realised that some requirements did not fit the needs of the nuclear industry. But instead of writing a new code from scratch, they added a new chapter to the existing body of knowledge: the celebrated ASME Section\ III.
As further years passed by, engineers (probably the same people that wrote section\ III) noticed that fatigue in nuclear power plants was not exactly the same as in other piping systems. There were some environmental factors directly associated with the power plant that were not taken into account by the regular ASME code. Again, instead of writing a new code from scratch, people decided to add correction factors to the previously-amended body of knowledge. This is how (sometimes) knowledge evolves, and it is these kinds of complexities that engineers are faced with during their professional lives. We have to admit it, it would be a very difficult task to re-write everything from scratch every time something changes. And, even though sometimes one would like to change how the world works, most of the time there are sound reasons not to do so.
In each of the countries that have at least one nuclear power plant there exists a national regulatory body which is responsible for licensing the owner to operate the reactor. These operating licenses are time-limited, with a range that can vary from 25 to 60 years, depending on the design and technology of the reactor. Once the license expires, the owner might be entitled to an extension, which the regulatory authority can grant provided it can be shown that a certain (and very detailed) set of safety criteria is met. One particular example is the requirement regarding fatigue in pipes, especially those belonging to systems that are directly related to reactor safety.
Why are pipes subject to fatigue? Well, on the one hand and without getting into many technical details, the most common nuclear reactor design uses liquid water to extract the heat generated in the fuel rods (coolant) and to slow down the fast neutrons born in the fission process (moderator). Nuclear power plants cannot by-pass the thermodynamics of the Carnot cycle, thus in order to maximise the efficiency of the conversion of the energy stored in the uranium nuclei into electricity we need to reach temperatures as high as possible. So, if we want to have liquid water in the core as hot as possible, we need to increase the pressure. The limiting temperature and pressure are given by the critical point of water, which is around 374ºC and 22\ MPa. It is therefore expected to have temperatures and pressures near those values in many systems of the plant, especially in the primary circuit (which is in contact with the reactor core) and those that directly interact with it, such as the pressure and inventory control system, the decay power removal system, the feedwater supply system, the emergency core-cooling system, etc.
[@Fig:cad-figure] shows the three-dimensional CAD model of a non-real piping system of an imaginary nuclear power plant which will serve as our case study to illustrate the complexities that arise in real-life engineering projects as compared to theoretical pipes drawn on a blackboard. There is a valve with a 10-inch inlet and a 12-inch outlet. There are elbows and a tee. There are supports at the end of the pipes and at intermediate locations, etc. And, more importantly, the 12-inch pipe, the valve body and the inlet and outlet nozzles are made of stainless steel while the 10-inch pipe is made of carbon steel. So differential thermal expansion leading to non-trivial mechanical stresses is expected to occur if the temperature distribution changes in time. This indeed happens a lot because nuclear power plants are not always working at 100% of their maximum power capacity. They need to be maintained and refuelled, they may undergo operational (and some incidental) transients, they might operate at a lower power due to load-following conditions, etc.
It should be noted that this case study is still a simplification of real-life piping systems, which are far more complex than [@fig:cad-figure]. Also, the analysis we will perform in this chapter is far simpler than what nuclear regulatory bodies require in order to grant lifetime extension licenses to plant operators. For instance, we will not discuss the analysis of ASME’s primary stresses nor go deep into the design-basis earthquake analysis, which is postulated to occur during the operational transients.
An important part of the analysis that almost always applies to nuclear power plants, but usually also to other installations, is the consideration of a possible seismic event. Given a postulated design earthquake, both the civil structures and the piping system itself need to be able to withstand such a load, even if it occurs at the moment of highest mechanical demand during one of the operational transients.
As the transients are postulated to occur cyclically a number of times throughout the lifetime of the plant (plus its extension period), mechanical fatigue in these piping systems may arise. This effect can initiate and grow microscopic cracks at the grain level, starting from defects such as dislocations. Cracks can grow at stresses well below the yield level. Once these cracks reach a critical size, the material fails catastrophically [@schijve]. There are currently no complete mechanical models describing fatigue from first principles, thus an empirical approach is used. There are two main ways to approach practical fatigue assessment problems using experimental data: the stress-life (S-N) approach and the strain-life approach.
The first one is suitable for cases where the stresses are nowhere near the yield stress of the material. When plastic deformation is expected to occur, strain-life methods ought to be employed.
For the present case study, as the loads come principally from operational loads which are not expected to cause plastic deformation, the ASME\ stress-life approach should be used. The stress amplitude ($S$) of a periodic cycle can be related to the number of cycles ($N$) after which failure by fatigue is expected to occur. For each material, this dependence can be measured using standardised tests, and a family of “fatigue curves” like the one depicted in\ @fig:SN for different temperatures can be obtained.
It should be noted that the fatigue curves are obtained for a particular load case, usually purely-periodic and one-dimensional, which cannot be directly generalised to other three-dimensional cases. Also, any real-life case will be subject to a mixture of complex cycles given by a stress time history and not to pure periodic conditions. The application of the S-N curve data implies a set of simplifications and assumptions that are translated into different possible “rules” for composing real-life cycles. The ASME S-N curves also adopt two safety factors which increase the stress amplitude and reduce the number of cycles respectively. All these intermediate steps render the analysis of fatigue into a conservative computation scheme. Therefore, when a fatigue assessment performed using the fatigue curve method arrives at the conclusion that “fatigue is expected to occur after ten thousand cycles” what it actually means is “we are sure fatigue will not occur before ten thousand cycles, yet it may not occur before one hundred thousand or even more.”
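Just to fix ideas with numbers (and only as an illustration: the curve values below are made up and this is not the actual ASME recipe, which has its own tabulated curves, cycle-counting rules and safety factors), a back-of-the-envelope sketch in GNU Octave of reading allowed cycles off an S-N curve and accumulating a linear usage factor could look like this:

```octave
% a minimal sketch, not the ASME procedure: made-up S-N data points
S = [  50  100  200  500 1000];   % stress amplitude [MPa]
N = [ 1e6  1e5  1e4  1e3  1e2];   % allowed cycles at each amplitude

% log-log interpolation of the allowed cycles for a given amplitude Sa
allowed = @(Sa) 10.^(interp1(log10(S), log10(N), log10(Sa)));

% hypothetical load history already reduced to amplitudes and counts
Sa = [400  150    80];            % stress amplitudes [MPa]
n  = [ 50  2000 10000];           % actual number of cycles of each one

% linear (Miner-like) accumulation of the partial usage factors
U = sum(n ./ allowed(Sa));
printf("cumulative usage factor = %.3f (should stay below 1)\n", U);
```

The only point I want to make here is that the whole scheme boils down to comparing how many cycles you actually apply against how many cycles the curve allows at each amplitude.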
Let us start our journey. Our starting place: undergraduate solid mechanics courses. Our goal: to obtain the internal state of a solid subject to a set of support conditions and loads, i.e. to solve the solid mechanics problem. It was Augustin-Louis Cauchy who formulated for the first time the elasticity equations we use today. We need to simultaneously solve the equilibrium equations (#1, relating the external loads to the internal stresses), the constitutive equations (#2, relating stresses to strains through the material properties) and the compatibility equations (#3, relating strains to displacements).
In any case, what we need to understand (and imagine) from point\ #1 above is that external forces lead to internal stresses. And in any three-dimensional body subject to such external loads, the best way to represent internal stresses is through a $3 \times 3$ stress tensor. This is the first point in which we should not fear mathematics. Trust me, let’s follow good old Augustin-Louis. It will pay back later on.
Does the term tensor scare you? It should not. A tensor is a general mathematical object and might get complex when dealing with many dimensions (as those encountered in weird stuff like string theory), but we will stick here to second-order tensors. They are slightly more complex than a vector, and I assume you are not afraid of vectors, are you? If you recall freshman-year algebra courses, a vector somehow generalises the idea of a scalar in the following sense: a given vector $\mathbf{v}$ can be projected onto any direction $\mathbf{n}$ to obtain a scalar $p$. We call this scalar $p$ the “projection” of the vector $\mathbf{v}$ in the direction $\mathbf{n}$. Well, a tensor can also be projected onto any direction $\mathbf{n}$. The difference is that instead of a scalar, a vector is now obtained.
So let me introduce the three-dimensional Cauchy stress tensor. What does solving the solid mechanics problem mean, then? It means finding, at each point of the solid, the components of the Cauchy stress tensor
$$ \begin{bmatrix} \sigma_{x} & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & \sigma_{y} & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & \sigma_{z} \\ \end{bmatrix} $$
\noindent which looks (and works) like a regular $3 \times 3$ matrix. Indeed, we will take advantage of this matrix-like behaviour in [@sec:linearity] below. Some brief comments about it:
What does this all have to do with mechanical engineering? Well, once we know what the stress tensor is for every point of a solid, in order to obtain the internal forces per unit area acting on a plane passing through that point with a normal given by the direction $\mathbf{n}$, all we have to do is “project” the stress tensor through $\mathbf{n}$. In plain simple words: multiply the tensor, seen as a $3 \times 3$ matrix, by the unit vector $\mathbf{n}$ to obtain the traction vector acting on that plane.
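If you prefer numbers over words, here is a minimal sketch in GNU Octave (the tensor components are arbitrary, chosen just for illustration):

```octave
% "projecting" a stress tensor onto a direction n: the result is the
% traction vector acting on the plane whose normal is n
sigma = [ 50  10   0
          10  20  -5
           0  -5  30 ];           % a symmetric (made-up) stress tensor [MPa]

n = [1 1 0]';                     % some direction...
n = n / norm(n);                  % ...normalised to a unit vector

t       = sigma * n;              % traction vector [MPa] on that plane
sigma_n = n' * t;                 % its normal component (a scalar)
tau     = norm(t - sigma_n * n);  % its shear component (another scalar)
```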
If you can compute the stress tensor at each point of your geometry, then… Congratulations! You have solved the solid mechanics problem.
The full solution involves the nine stresses, out of which only six are different. If we manage to compute the principal stresses $\sigma_1$, $\sigma_2$ and $\sigma_3$ we reduce the number to three. We can go further and obtain a single scalar stress “intensity” value by using one of several material yield theories. The two most common ones are those by Tresca and von\ Mises. The former is the maximum absolute difference between all possible combinations of principal stresses
$$ \sigma_\text{Tr} = \max \Big[ \left| \sigma_1 - \sigma_2 \right|, \left| \sigma_2 - \sigma_3 \right|, \left| \sigma_3 - \sigma_1 \right| \Big] $$
\noindent and the latter is
$$ \sigma_\text{vM} = \sqrt{\frac{\left(\sigma_1 - \sigma_2 \right)^2+ \left(\sigma_2 - \sigma_3 \right)^2+ \left(\sigma_3 - \sigma_1 \right)^2}{2}} $$
Up to 2010, both sections\ III (nuclear components) and VIII (general pressurised components) of ASME were based on Tresca theory. In newer revisions, section\ VIII switched to von\ Mises while section\ III kept using Tresca.
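As a quick sanity check (a sketch, not a validated routine), both intensities can be computed in GNU Octave from the eigenvalues of any symmetric stress tensor:

```octave
% Tresca and von Mises stress intensities from the principal stresses
sigma = [ 50  10   0
          10  20  -5
           0  -5  30 ];             % the same made-up tensor as before [MPa]

s = sort(eig(sigma), 'descend');    % principal stresses, sigma_1 >= sigma_2 >= sigma_3

tresca   = max(abs([s(1)-s(2), s(2)-s(3), s(3)-s(1)]));
vonmises = sqrt(((s(1)-s(2))^2 + (s(2)-s(3))^2 + (s(3)-s(1))^2) / 2);
```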
::::: {#fig:timoshenko}
{#fig:timoshenko-cyl width=48%}
{#fig:timoshenko-eq width=48%}
Figures from Timoshenko’s seminal book [@timoshenko]. :::::
Given the cylindrical symmetry of the problem, there can be no dependence on the angular coordinate\ $\theta$ (i.e. there can be no torsion). Also, due to the assumption that the pipe is infinite, no result can depend on the axial direction (i.e. it cannot bend along its axis), so there is only one independent variable, namely the radial coordinate $r$. Moreover, there are only two displacement fields that need to be considered: the axial $u_a(r)$ and the radial $u_r(r)$. The former is identically zero because the cylinder is infinite, so it makes no sense to assume the pipe can move any finite value along its axis, rendering a plane strain condition, which is thoroughly discussed in [@nbr03].
The equilibrium equation along the radial direction $r$, also known as the Lamé equation, can be derived with the aid of [@fig:timoshenko-eq] as
$$ \frac{d\sigma_r}{dr} + \frac{\sigma_r(r) - \sigma_\theta(r)}{r} = 0 $${#eq:equilibrium}
Defining the strains as $\epsilon_r = du_r/dr$ and $\epsilon_\theta = u_r/r$, the constitutive equations of an isotropic linear material are
\begin{align}
\sigma_r &= \frac{E}{(1+\nu)(1-2\nu)} \cdot \Big[ (1-\nu) \cdot \epsilon_r + \nu \cdot \epsilon_\theta \Big] \\
\sigma_\theta &= \frac{E}{(1+\nu)(1-2\nu)} \cdot \Big[ \nu \cdot \epsilon_r + (1-\nu) \cdot \epsilon_\theta \Big]
\end{align}
then the differential equation [-@eq:equilibrium] can be cast in terms of the radial displacement $u_r(r)$ as
$$ \frac{d^2 u}{dr^2} + \frac{1}{r} \cdot \frac{du}{dr} - \frac{u}{r^2} = 0 $$ that has the general solution
$$ u(r) = c_1 \cdot r + \frac{c_2}{r} $$
For the boundary conditions of this particular problem, namely a radial stress equal to minus the internal pressure at $r=a$ and zero at $r=b$, the radial displacement has the particular solution shown in [@eq:ur] below.
Remember that when any solid body is subject to external forces, it has to react in such a way as to satisfy the equilibrium conditions. The way solids do this is by deforming a little bit so that the whole body acts as a compressed (or elongated) spring balancing the load. Starting from [@fig:timoshenko-eq] and after some mathematics (shown in detail in reference\ [@pipe-linearized]), which is what most of us have already done in college, we can find that the displacement field has the following analytical solution:
$$ u_r(r) = p \cdot \frac{1+\nu}{E} \cdot \frac{a^2}{b^2-a^2} \cdot \left[ 1-2\nu + \frac{b^2}{r^2} \right]\cdot r $${#eq:ur}
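If you want to play with [@eq:ur] numerically (with made-up dimensions and properties, not the ones of our case study), a minimal GNU Octave sketch would be:

```octave
% radial displacement of an infinite thick-walled pipe under internal pressure
a  = 0.10;     % internal radius [m]
b  = 0.12;     % external radius [m]
p  = 10e6;     % internal pressure [Pa]
E  = 200e9;    % Young's modulus [Pa]
nu = 0.3;      % Poisson's ratio

r  = linspace(a, b, 50);
ur = p * (1+nu)/E * a^2/(b^2-a^2) * ((1-2*nu) + b^2./r.^2) .* r;

plot(r, ur*1e6);
xlabel('r [m]'); ylabel('u_r [um]');
```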
Exercises for the reader:
What does [@eq:ur] mean? Well, that overall the whole pipe expands a little bit radially with the inner face being displaced more than the external surface (use your imagination or wait until we get to [@fig:ur]). But the important thing here is that we have an expression that explicitly tells us how it expands:
That is how an infinite pipe withstands internal pressure. And that is what we are taught in college, which is actually true by the way!
As the solid is deformed, that is to say, as different parts are displaced relative to one another, strains and stresses appear. When seen from a cylindrical coordinate system, the stress tensor (recall [@sec:tensor]) has these features.
We can note that
That is all we can say about an infinite pipe with uniform material properties subject to a uniform internal pressure\ $p$. Note that if
\noindent then we would no longer be able to fully solve the problem with paper and pencil, have an explicit equation that tells us how the system reacts to the external loads, nor draw any of the conclusions above. However, at least we have a start, because we know that if the pipe is finite but long enough, or the temperature is not uniform but almost, we can still use the analytical equations as approximations. After all, Enrico Fermi managed to reach criticality in the Chicago Pile-1 with paper and pencil. But what happens if the pipe is short, there are branches, and the temperature changes like during a transient in a nuclear reactor? Well, that is why we have finite elements. And this is where what we learned at college pretty much ends.
Besides infinite pipes (both thin and thick-walled), spheres and a couple of other geometries, there are no other cases for which we can obtain analytical expressions for the elements of the stress tensor. To get results for a solid with real engineering interest, we need to use numerical methods to solve the equilibrium, constitutive and compatibility equations. It is not that the equations are hard per se. It is that the mechanical parts we engineers like to design (which are of course more complex than cylinders and spheres) are so intricate that they turn simple equations into monsters which are unsolvable with pencil and paper. Hence, finite elements enter the scene.
But before turning our attention directly to finite elements (and leaving college, at least undergraduate college) it is worth spending some time thinking about other alternatives. Are we sure we are tackling our problems in the best possible way? I mean, not just engineering problems. Do we take a break, step back for a while and see the whole picture, looking at all the alternatives so we can choose the most cost-effective one?
There are literally dozens of ways to numerically solve the equilibrium equations, but for the sake of brevity let us take a look at the three most famous ones. Coincidentally, they all contain the word “finite” in their names. We will not dig into them, but it is nice to know they exist. We might use
Each of these methods (also called schemes) has of course its own features, pros and cons. They all exploit the fact that the equations are easy to solve in simple geometries (say a cube). Then the actual geometry is divided into a juxtaposition of these cubes, the equations are solved in each one, and a global solution is obtained by stitching the little simple solutions to one another. The process of dividing the original domain into simple geometries is called discretisation, and the resulting collection of these simple geometries is called a mesh or grid. They are composed of volumes, called cells (or elements), and vertices, called nodes. Now, grids can be either
a. structured, or
b. unstructured
::::: {#fig:grids}
{#fig:continuous width=30%}\
{#fig:structured width=30%}\
{#fig:unstructured width=30%}
Discretisation of a spatial domain. For the same number of cells, unstructured grids can better represent arbitrary shapes. :::::
Back to the three numerical methods, we must say that finite differences are based on approximating derivatives (i.e. differentials) by incremental quotients (i.e. differences). The second one relies heavily on geometrical ideas rather than on pure mathematical grounds. Finally, our beloved finite elements are stricter in the mathematical sense. Actually, a complete derivation of the finite element method can be written in a textbook without requiring a single figure, just like D’Alembert did more than two centuries ago. In any case, it is important to note that finite differences and finite elements compute results at the nodes of a mesh, whilst finite volumes compute results at the cells of a mesh. Also, any method may be used on structured grids, but only finite elements and finite volumes are especially suited for working with unstructured grids.
There are technical reasons that justify why the finite element method is the king of mechanical analysis. But that does not mean that other methods cannot be employed. For instance, fluid mechanics problems are generally better solved using finite volumes. And other combinations may be found in the literature.
Before proceeding, I would like to make two comments about common nomenclature. The first one is that if we exchanged the words “volumes” and “elements” in all the written books and articles, nobody would notice the difference. There is nothing particular in either theory that can justify why FVM uses “volumes” and FEM uses “elements.” Actually, volumes and elements are the same geometric constructions. As far as I know, the names were randomly assigned.
The second one is more philosophical and refers to the word “simulation,” which is often used to refer to solving a problem using a numerical scheme such as the finite element method. I am against using this word for this endeavour. The term simulation has a connotation of both “pretending” and “faking” something, which is definitely not what we are doing when we solve an engineering problem with finite elements. Sure, there are some cases in which we simulate, such as when using the Monte Carlo method (originally used by Fermi as an attempt to understand how neutrons behave in the core of nuclear reactors). But when solving deterministic mechanical engineering problems I would rather say “modelling” than “simulation.”
This section is not (just) about different kinds of elements like tetrahedra, hexahedra, pyramids and so on. It is about the different kinds of analysis there are. Indeed, there is a whole plethora of particular types of calculations we can perform, all of which can be called “finite element analysis.” For instance, for the mechanical problem, we can have different kinds of
And then there exist different pre-processors, meshers, solvers, pre-conditioners, post-processing steps, etc. A similar list can be made for the heat conduction problem, electromagnetism, the Schrödinger equation, neutron transport, etc. But there is also another level of “kind of problem,” which is related to how much accuracy and precision we are willing to sacrifice in order to have a (probably very much) simpler problem to solve. Again, there are different combinations here, but a certain problem can be solved using any of the following three approaches, listed in increasing amount of difficulty and complexity: conservative, best-estimate or probabilistic.
The first one is the easiest because we are allowed to choose parameters and to make engineering decisions that may simplify the computation as long as they push the results towards the worst-case scenario. More often than not, a conservative estimation is enough in order to consider a problem as solved. Note that this is actually how fatigue results are obtained using fatigue curves, as discussed in\ [@sec:fatigue]. Some care should be taken when considering what the “worst-case scenario” is. For instance, if we are analysing the temperature distribution in a mechanical part subject to convection boundary conditions, we might take either a very large or a very low convection coefficient as the conservative case. If we needed to design fins to dissipate heat, then a low coefficient would be the conservative choice. But if the mechanical properties deteriorated with high temperatures, then the conservative way to go would be to set a high convection coefficient. A common practice is to have a fictitious set of parameters, each of them individually conservative (i.e. leading to the worst case), even if the overall combination is not physically feasible.
As neat and tempting as conservative computations may be, sometimes the assumptions may be too biased toward the bad direction and there might be no way of justifying certain designs with conservative computations. It is then time to sharpen our pencils and perform a best-estimate computation. This time, we should stick to the most-probable values of the parameters and even use more complex models that can better represent the physical phenomena that are going on in our problem. Sometimes best-estimate computations are just slightly more complex than conservative models. But more often than not, best-estimates get far more complicated. And these complications come not just in the finite-element model of the elastic problem but in the dependence of properties with space, time and/or temperature, in non-trivial relationships between macro and microscopic parameters, in more complicated algorithms for post-processing data, etc.
Finally, when the uncertainties associated with the parameters, methods and models used in a best-estimate calculation render the results too inaccurate, it might be necessary to do a full set of parametric runs taking into account the probabilistic distribution of each of the input parameters. It involves
This kind of computation is usually required by the nuclear regulatory authorities when power plant designers need to address the safety of the reactors. What is the heat capacity of uranium above 1000ºC? What is the heat transfer coefficient when approaching the critical heat flux before the Leidenfrost effect occurs? A certain statistical analysis has to be done prior to actually parametrically sweeping (see\ [@sec:parametric]) the input parameters so as to obtain a distribution of possible outcomes.
We might get into an infinite taxonomic loop if we continue down this path. So let us move one step closer to our case study in this journey from college theory to an actual engineering problem.
So we know we need a numerical scheme to solve our mechanical problem because anything slightly more complex than an infinite pipe does not have an analytical solution. We need an unstructured grid because we would not use Legos to discretise cylindrical pipes. We selected the finite element method over the finite volume method, because FEM is the king. Can we pause again and ask ourselves why it is that we want to do finite-element analysis?
\medskip
There exists a very useful problem-solving technique conceived by Taiichi Ohno, the father of the Toyota production system, known as the Five-whys rule. It is based on the fact that people make decisions following a certain reasoning logic that most of the time is subjective and biased, not purely rational and neutral. By recursively asking (at least five times) the cause of a certain issue, it might be possible to understand what the real nature of the problem (or issue being investigated) is. And it might even be possible to take counter-measures in order to fix what seems wrong.
Here is the original example:
Why did the robot stop?
The circuit has overloaded, causing a fuse to blow.
Why is the circuit overloaded?
There was insufficient lubrication on the bearings, so they locked up.
Why was there insufficient lubrication on the bearings?
The oil pump on the robot is not circulating sufficient oil.
Why is the pump not circulating sufficient oil?
The pump intake is clogged with metal shavings.
Why is the intake clogged with metal shavings?
Because there is no filter on the pump.
You get the point, even though we know thanks to Richard Feynman that to answer a “why” question at some point we need to rely on the questioner’s previous experience. We usually assume we have to do what we usually do (i.e. perform finite element analysis). But do we? Do we add a filter or do we just replace the fuse?
\medskip
Getting back to the case study: do we need to do FEM analysis? Well, it does not look like we can obtain the stresses of the transient cycles with just pencil and paper. But how much complexity should we add? We might do as little as axisymmetric linear steady-state conservative studies or as much as full three-dimensional non-linear transient best-estimate plus uncertainties computations. And here is where good engineers stand out: in putting their engineering judgement (call it experience or hunches) into defining what to solve. And it is not (just) because the first option is faster to solve than the latter. Involving many complex methods needs more engineering time
In the first years of the history of computers, when programs were written on card decks and output results were printed on continuous paper sheets, it made sense for computer programs to calculate and write as much data as possible even if it was not needed. One would never know whether it would be needed in the future, and CPU time was so expensive that re-running engineering computations because a particular result was not included in the output was forbidden. But that is not remotely true in the XXI century anymore. Computing time is now so much cheaper than engineering time (a result known as the UNIX Rule of Economy) that it should be neglected with respect to the time spent by a cognisant engineer searching and sorting thousands of hard-to-read floating-point numbers.
So we need to address the issue of fatigue in nuclear reactor pipes that
As I wanted to illustrate in [@sec:five], it is very important to decide what kind of problem (actually problems) we should be dealing with.
Since we already agreed there is no way to obtain analytical expressions for the stresses in this general case, we need to employ a numerical scheme to solve the equations. Of course we are choosing the finite element method, but keep in mind that there are a lot of other methods for solving partial differential equations: finite differences, finite volumes, modal methods, etc. Even within finite elements there are many variations, such as displacement-based or mixed formulations, Galerkin or least-squares weighting, and so on. In particular, finite elements compute nodal values (i.e. displacements and stresses at discrete points in space) and then provide a way to interpolate the results back to any other arbitrary point of the domain. If the method is applied correctly, mesh refinement will lead to improved results---at the cost of needing rapidly-increasing computing power, measured in both CPU and RAM. In the limit of an infinite number of nodes, the FEM results converge to the actual solution of the original PDEs.
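To picture what “interpolating the results back” means, here is a minimal sketch with a single one-dimensional linear element (two nodes); real solvers do exactly the same thing, only with fancier shape functions in three dimensions:

```octave
% nodal results interpolated to an arbitrary point inside a two-node element
x1 = 0.0;  x2 = 1.0;              % the two nodes of the element
u1 = 3.0;  u2 = 5.0;              % nodal values (displacement, temperature...)

h1 = @(x) (x2 - x) / (x2 - x1);   % shape function associated with node 1
h2 = @(x) (x - x1) / (x2 - x1);   % shape function associated with node 2

x = 0.3;                          % an arbitrary point inside the element
u = h1(x)*u1 + h2(x)*u2;          % interpolated value at that point
```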
:::: {#fig:pipe-linearized}
{#fig:ur width=90%}
Comparison between analytical and FEA results (ref.\ [@pipe-linearized]). ::::
As a nuclear engineer, I learned (theoretically in college but practically after college) that there are some models that let you see some effects and some that let you see other effects. And even if, in principle, it is true that more complex models should allow you to compute more stuff, they definitely might show you nothing at all if the model is so big and complex that it does not fit into a computer cluster (say because it needs hundreds of terabytes of RAM) or because it takes more time to compute than you may have before the final report is expected.
::::: {#fig:pt}
{#fig:pt1 width=95%}
The four (imaginary) transient operational conditions for the case study. :::::
Then we note that we need to solve
i. the transient heat transfer equation to get the temperature distribution within the pipes for all times,
ii. the natural frequencies and oscillation modes of the piping system to obtain the pseudo-accelerations generated by the design earthquake, and finally
iii. the elastic problem to obtain the stress tensor needed to compute the alternating stress to enter into the fatigue curve.
For each time\ $t$ of the operational (or incidental) transients, the pipes are subject to water flowing with
a. an internal pressure\ $p_i(t)$ that depends on time, and
b. a certain time-dependent temperature $T_i(t)$ that gives rise to another non-trivial time-dependent temperature distribution\ $T(\mathbf{x},t)$ in the bulk of the pipes.
Also, at those times when the design earthquake is assumed to occur, there are internal distributed forces\ $\mathbf{f}=\rho \cdot \mathbf{a}$ acting on both the water and the pipes’ steel.
All these effects will give rise to stresses that, if repeated over time, will create and grow microscopic cracks which might end in failure by fatigue. The ASME standard gives guidance on how to estimate this damage, and it starts by asking us to define stress classification lines (SCLs). ASME says that they are straight lines that go through a wall of the pipe (or vessel or pump, which is what the ASME code is for) from the inside to the outside and ought to be normal to the iso-stress curves. Stop. Picture a stress field, draw in your head the iso-stress curves (those would be the lines that have the same colour in your picture) and then imagine a set of lines that travel in a direction perpendicular to them. Finally, choose the one that seems the prettiest (which most of the time is the one that seems the easiest). There you go! You now have an SCL. But there is a catch. So far, we have referred to a generic concept of “stress.” Which of the several stresses out there should you picture? One of the three normal ones, the three shear ones, von\ Mises, Tresca? Well, actually you would have to imagine tensors instead of scalars. And there might not be such a thing as “iso-stress” curves, let alone normal directions. So pick any radial straight line through the pipe wall at a location that seems relevant and now you are done. In our case study, there will be a few different locations around the material interfaces where high stresses due to differential thermal expansion are expected to occur. Just keep this thought with you: it is very important to define where the SCLs are located, as they will define the “quality” of the obtained results.
For the present case study, four SCLs were defined as illustrated in [@fig:scls]: at a distance of three millimetres from the carbon-stainless steel material interface at the valve inlet, on the vertical $x$-$z$ plane, two on each material, two at the bottom and two at the top of the pipes. It is at the internal point of these four SCLs that fatigue resistance is to be assessed.
Let us invoke our imagination once again. Assume that in a certain time interval of the transients the temperature of the water inside the pipes fell abruptly from, say, 250ºC down to 150ºC in a few seconds, stayed cool for half an hour and then went back to 250ºC (similarly to [@fig:pt4]). The internal wall of the pipes would follow the transient temperature (it might be exactly equal to it or close to it through Newton’s law of cooling). If the pipe was in a state of uniform temperature, the ramp in the internal wall would start cooling the bulk of the pipe, creating a transient thermal gradient. Due to thermal inertia effects, the temperature might have a non-trivial dependence when the ramps started or ended. First try to think and picture it! Then see @fig:valve-temp and the videos referenced in @sec:online. So we need to compute a transient heat transfer problem with convective boundary conditions, because the usual tricks, like computing a sequence of steady-state solutions for different times, would not be able to recover these non-trivial distributions.
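Just to build intuition (treating the wall as a one-dimensional slab with made-up parameters, which is of course not the actual three-dimensional model we will solve), a crude explicit finite-difference sketch in GNU Octave shows how the bulk of the wall lags behind the internal surface:

```octave
% explicit 1D transient conduction across a pipe wall treated as a slab
L = 0.02;  n = 50;  dx = L/n;          % wall thickness [m] and grid size
alpha = 4e-6;                          % thermal diffusivity [m^2/s]
dt = 0.4 * dx^2 / alpha;               % time step below the stability limit

T = 250 * ones(n,1);                   % initially uniform temperature [degC]
for step = 1:round(1800/dt)            % half an hour of transient
  T(1) = 150;                          % internal wall follows the cold water
  T(n) = T(n-1);                       % (roughly) adiabatic external wall
  T(2:n-1) = T(2:n-1) + alpha*dt/dx^2 * (T(3:n) - 2*T(2:n-1) + T(1:n-2));
end
```

Plot `T` at a few intermediate times and you will see the gradient developing through the thickness, which is precisely what creates the differential thermal stresses we are after.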
Remember the main issue of the fatigue analysis in these systems is to analyse what happens around the location of changes of piping classes where different materials (i.e. different expansion coefficients) are present, potentially causing high stresses due to differential thermal expansion (or contraction) under transient conditions. Therefore, even though we are dealing with pipes we cannot use beam or circular shell elements, because we need to take into account the three-dimensional effects of the temperature distribution along the pipe thickness, let alone to model what happens within the body of the valve.
::: {#fig:mech}
{width=95% #fig:mech-msh}
Unstructured volumetric mesh for the CAD of [@fig:cad-figure]. :::
There is a wonderful essay by Isaac Asimov called “The Relativity of Wrong” [@relativity-wrong] where he introduces the idea that even if something cannot be computed exactly, there are still different levels of error.
When people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together.
We can then merge this idea by Asimov with an adapted version of Saint-Venant’s principle and note that the detailed transient temperature distribution is important only around the location of the SCLs. We can thus make an engineering approximation and
::::: {#fig:valve}
{#fig:valve-mesh width=100%}
Reduced mesh around the valve including the carbon-steel nozzle. :::::
{#fig:temp-generalization width=100%}
Note that there is no need to have a one-to-one correspondence between the elements of the reduced mesh and the elements of the original one. Actually, the reduced mesh contains first-order elements whilst the original one has second-order elements. Also the grid density is different, yet both of them are locally refined around the material interface. Nevertheless, the finite-element solver Fino---used to solve both the heat and the mechanical problems---allows one to read functions of space and time defined over one mesh and to continuously evaluate and use them over another one, even if the two grids have different elements, orders or even dimensions.
Every nuclear power plant is designed to withstand earthquakes. Of course, not all plants need the same level of reinforcement. Those built in large quiet plains will be, seismically speaking, cheaper than those located in geologically active zones. Keep in mind that all the 54 Japanese nuclear power reactors did structurally resist the 2011 earthquake, and all of them were safely shut down. What actually happened in Fukushima is that one hour after the main shake, a 14-metre tsunami splashed on the coast, jumping over the 9-metre defences and flooding the emergency Diesel generators that provided power to the pumps in charge of removing the remaining decay power from the already-stopped reactor core.
Back to our case study, the point is that each site where nuclear power plants are built must have a geological study where a postulated design-basis earthquake is to be defined. In other words, a theoretical earthquake which the plant ought to withstand needs to be specified. How? By giving a set of three spectra (one for each coordinate direction) with acceleration as a function of frequency for each level of the building. That is to say, once the earthquake hits the power plant, depending on soil-structure interactions the energy will shake the building foundations in a way that depends on the characteristics of the earthquake, the soil and the concrete structure. Afterwards, the way the oscillations travel upward and shake each of the mechanical components erected on each floor level depends on the design of the civil structure in a way which is fully determined by floor response spectra like the ones depicted in\ [@fig:spectrum].
As the earthquake excites some frequencies more than others, it is mandatory to know which are the natural frequencies and modes of oscillations of our piping system. Mathematically, this requires the computation of an eigenvalue problem. Simply stated, we need to find all the non-trivial solutions of the equation
$$ K \phi_i = \lambda_i \cdot M \phi_i $$ where $K$ is the usual finite-element stiffness matrix, $M$ is the mass matrix, $\lambda_i$ is the square of the $i$-th natural (angular) frequency of the structure and $\phi_i$ is a vector containing the nodal displacements corresponding to the $i$-th mode of oscillation.
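For instance, a toy three-degree-of-freedom spring-mass chain (which has nothing to do with the actual piping model, it is just the smallest example I could think of) can be solved with the very same equation in GNU Octave:

```octave
% generalised eigenvalue problem K*phi = lambda*M*phi for a toy 3-DOF chain
k = 1e5;  m = 2;                      % arbitrary stiffness [N/m] and mass [kg]
K = k * [ 2 -1  0
         -1  2 -1
          0 -1  1 ];
M = m * eye(3);

[phi, lambda] = eig(K, M);            % modes (columns) and eigenvalues
f = sqrt(diag(lambda)) / (2*pi);      % natural frequencies [Hz]
```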
Practically, these problems are solved using the same mechanical finite-element program one would use to solve a standard elastic problem, provided such a program supports this kind of problem (Fino does). There are only two caveats we need to take into account:
A real continuous solid has infinitely many modes of oscillation. A discretised one (using the most common and efficient FEM formulation, the displacement-based formulation) has as many modes as degrees of freedom, i.e. three times the number of nodes. In any case, one is usually interested in only a few of them, namely those with the lowest frequencies, because they take up most of the energy. Each mode has two associated parameters, called modal mass and excitation, that reflect how “important” the mode is regarding the absorption of energy from an external oscillatory source. Usually a couple of dozen modes are enough to take up more than 90% of the earthquake energy, as illustrated by\ [@fig:acumulada].
These first modes, shown in\ [@fig:modes], that take up most of the energy are then used to take the earthquake load into account. There are several ways of performing this computation, but the ASME\ III code states that the method known as SRSS (for Square Root of the Sum of Squares) can be used. This method mixes the eigenvectors with the floor response spectra through the eigenvalues and gives a spatial (actually nodal) distribution of three accelerations (one for each direction) that, when multiplied by the density of the material, give a distributed force (in units of newtons per cubic millimetre, for example) which is statically equivalent to the load coming from the postulated earthquake.
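Just to sketch the flavour of the combination (this is a generic modal-spectral recipe with a made-up spectrum, certainly not the exact procedure prescribed by ASME III), the modes of the previous toy problem could be combined like this:

```octave
% SRSS-like combination of modal responses with a hypothetical floor spectrum
Sa = @(f) 2.5 * 9.8 * exp(-(log10(f) - log10(5)).^2);  % made-up spectrum [m/s^2]

r = ones(3,1);                        % influence vector (all DOFs excited)
a_modal = zeros(3, columns(phi));
for i = 1:columns(phi)
  gamma = (phi(:,i)' * M * r) / (phi(:,i)' * M * phi(:,i));  % participation factor
  a_modal(:,i) = gamma * Sa(f(i)) * phi(:,i);                % modal acceleration
end

a_srss = sqrt(sum(a_modal.^2, 2));    % square root of the sum of the squares
```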
::::: {#fig:modes}
{width=48%}\
{width=48%}
First natural oscillation modes. Videos available online ([@sec:online]). :::::
::::: {#fig:acceleration}
{width=80%}
Equivalent accelerations for a certain piping section. :::::
The ASME code says that these accelerations ought to be applied twice: once with the original sign and once with all the elements with the opposite sign. The application of each of these equivalent loads should last two seconds in the original time domain.
Even though we have not yet discussed it in detail, we want to solve an elastic problem subject to an internal pressure condition, with a non-uniform temperature distribution that leads to both thermal stresses and variations in the mechanical properties of the materials. And as if this were not enough, we want to add, during a couple of seconds, a statically-equivalent distributed load arising from a design earthquake. This last point means that at the transient instant where the stresses (from the fatigue point of view) are maximum, we have to add the distributed loads that we computed from the seismic spectra to the other thermal and pressure loads. But we have a linear elastic problem (well, we still do not have it but we will in\ [@sec:break]), so we might be tempted to exploit the problem’s linearity and compute all the effects separately and then sum them up to obtain the whole combination. We may thus compute just the stresses due to the seismic loads and then add these stresses to the stresses at any time of the transient, in particular the one with the highest stresses. After all, in linear problems the result of the sum of two load cases is the sum of the results of the individual cases, right? Not always.
Let us jump out of our nuclear piping problem and step back into general finite-element theory for a moment (remember we were going to jump back and forth). Assume you want to know how much your dog weighs. One thing you can do is to weigh yourself (let us say you weigh 81.2\ kg), then grab your dog and weigh both yourself and your dog (let us say you and your dog weigh 87.3\ kg). Would you swear your dog weighs 6.1\ kg plus/minus the scale’s uncertainty? I can tell you that the weight of two individual protons and two individual neutrons is not the same as the weight of an\ $\alpha$ particle. Would there not be a master-pet interaction that renders the weighing problem non-linear?
\medskip
Time for both of us to make an experiment. Grab your favourite FEM program for the first time (remember mine is CAEplex, which can also be accessed through Onshape) and create a 1mm $\times$ 1mm $\times$ 1mm cube. Set whatever values you want for the Young’s modulus and Poisson’s ratio. I chose\ $E=200$\ GPa and\ $\nu=0.28$. Restrict the three faces pointing towards the negative axes to their planes, i.e.
Now we are going to create and compare three load cases:
a. Pure normal loads (https://caeplex.com/p/d8fe)
b. Pure shear loads (https://caeplex.com/p/b494)
c. The combination of A & B (https://caeplex.com/p/9899)
The loads in each case are applied to the three remaining faces, namely “right” ($x>0$), “back” ($y>0$) and “top” ($z>0$). Their magnitudes in newtons are:
|        | “right” $F_x$ | “right” $F_y$ | “right” $F_z$ | “back” $F_x$ | “back” $F_y$ | “back” $F_z$ | “top” $F_x$ | “top” $F_y$ | “top” $F_z$ |
|--------|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| Case A | +10 | 0 | 0 | 0 | +20 | 0 | 0 | 0 | +30 |
| Case B | 0 | +15 | -15 | +25 | 0 | -5 | -15 | +25 | 0 |
| Case C | +10 | +15 | -15 | +25 | +20 | -5 | -15 | +25 | +30 |
In the first case, the principal stresses are uniform and equal to the three normal loads. As the forces are in Newton and the area of each face of the cube is 1\ mm$^2$, the usual sorting leads to
$$ \sigma_{1A} = 30~\text{MPa} $$ $$ \sigma_{2A} = 20~\text{MPa} $$ $$ \sigma_{3A} = 10~\text{MPa} $$
::::: {#fig:cube}
{#fig:cube-shear width=65%}
Spatial distribution of principal stress\ 3 for cases\ B and\ C. If linearity applied, case\ C would be equal to case\ B plus a constant. :::::
In the second case, the principal stresses are not uniform and have a non-trivial distribution. Indeed, the distribution of\ $\sigma_3$ obtained by CAEplex is shown in\ [@fig:cube-shear]. Now, if we were indeed facing a fully linear problem, then the result of the sum of two inputs would be equal to the sum of the individual results. Yet\ [@fig:cube-full], which shows the principal stress\ 3 of case\ C, is not the result of case\ B plus any of the three constants from case\ A. Had it been, the colour distribution would be exactly the same, as the scale goes automatically from the most negative value in blue to the most positive value in red. And 7+30\ $\neq$ 33. Alas, it seems that there exists some kind of unexpected non-linearity (the feared master-pet interaction?) that prevents us from fully splitting the problem into simpler chunks.
\medskip
So what is the source of this unexpected non-linear effect in an otherwise nice and friendly linear formulation? Well, you probably already know it, because after all it is almost high-school mathematics. But I learned it long after college, when I had to face a real engineering problem and not just trivial back-of-the-envelope pencil-and-paper exercises.
Recall that principal stresses are the eigenvalues of the stress tensor. And the fact that in a linear elastic formulation the stress tensor of case\ C above is the sum of the individual stress tensors from cases\ A and B does not mean that their eigenvalues can be summed (think about it!). Again, imagine the eigenvalues and eigenvectors of cases A & B. Got it? Good. Now imagine the eigenvalues and eigenvectors for case\ C. Should they sum up? No, they should not! Let us make another experiment, this time with matrices using Octave or whatever other matrix-friendly program you want (try to avoid black boxes as explained in\ [@sec:two-materials]).
First, let us create a 3 $\times$ 3 random matrix $R$ and then multiply it by its transpose\ $R^T$ to obtain a symmetric matrix\ $A$ (recall that the stress tensor from [@sec:tensor] is symmetric):
```
octave> R = rand(3); A = R*R'
A =
   2.08711   1.40929   1.31108
   1.40929   1.32462   0.57570
   1.31108   0.57570   1.09657
```
Do the same to obtain another 3 $\times$ 3 symmetric matrix\ B:
```
octave> R = rand(3); B = R*R'
B =
   1.02619   0.73457   0.56903
   0.73457   0.53386   0.37772
   0.56903   0.37772   0.53141
```
Now compute the sum of the eigenvalues first and then the eigenvalues of the sum:
```
octave> eig(A)+eig(B)
ans =
   0.0075113
   0.8248395
   5.7674016
octave> eig(A+B)
ans =
   0.049508
   0.782990
   5.767255
```
Did I convince you? More or less, right? The third eigenvalue seems to fit. Let us not throw all of our beloved linearity away and dig further into the subject. There are still two important issues to discuss, which can be easily addressed using freshman-year linear algebra (remember not to fear maths!). First of all, even though principal stresses are not linear with respect to sums, they are linear with respect to multiplication by a scalar. Once more, think what happens to the eigenvalues and eigenvectors of a single stress tensor as all its elements are scaled up or down by a real scalar. The eigenvectors stay exactly the same, and the eigenvalues get scaled by that very same factor! So, for example, the von\ Mises stress (which is a combination of the principal stresses) of a beam loaded with a force\ $\alpha \cdot \mathbf{F}$ is\ $\alpha$ times the stress of the beam loaded with a force\ $\mathbf{F}$. Please test this hypothesis by playing with your favourite FEM solver. Or even better, take a look at the stress invariants $I_1$, $I_2$ and $I_3$ (you can search online or peek into the source code of Fino, grep for the routine called fino_compute_principal_stress()) and see (using paper and pencil!) how they scale up if the individual elements of the stress tensor are scaled by a real factor\ $\alpha$.
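In case you do not feel like grepping any source code, these are the usual textbook definitions (double-check them against your favourite reference, I am writing them from memory):

$$ I_1 = \sigma_x + \sigma_y + \sigma_z $$
$$ I_2 = \sigma_x \sigma_y + \sigma_y \sigma_z + \sigma_z \sigma_x - \tau_{xy}^2 - \tau_{yz}^2 - \tau_{zx}^2 $$
$$ I_3 = \det \left( \boldsymbol{\sigma} \right) $$

Scaling every component of the tensor by $\alpha$ scales $I_1$, $I_2$ and $I_3$ by $\alpha$, $\alpha^2$ and $\alpha^3$ respectively, and since the principal stresses are the roots of $\sigma^3 - I_1 \sigma^2 + I_2 \sigma - I_3 = 0$, each root (and hence Tresca and von Mises) simply gets multiplied by $\alpha$.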
The other issue is that even though in general the eigenvalues of the sum of two matrices are not the sums of the individual eigenvalues, there are some cases in which they are. Indeed, if two matrices\ $A$ and\ $B$ commute, i.e. their product is commutative
$$ A \cdot B = B \cdot A $$ then the sums (in plural because there are three eigenvalues) of their eigenvalues are equal to the eigenvalues of the sum. In order for this to happen, both\ $A$ and\ $B$ need to be diagonalisable using the same basis. That is to say, the diagonalising matrix\ $P$ such that $P^{-1} A P$ is diagonal should be the same one that renders\ $P^{-1} B P$ diagonal as well. What does this mean mechanically? Well, if case\ A has loads that are somehow “independent” from the ones in case\ B, then the principal stresses of the combination might be equal to the sum of the individual principal stresses. A notable case is, for example, a beam that is loaded vertically in case\ A and horizontally in case\ B. I dare you to grab your FEM program one more time, run a test and picture the eigenvalues and eigenvectors of the three cases in your head.
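One more quick Octave experiment to convince ourselves, this time building the two matrices so that they share the same basis on purpose (the numbers are arbitrary):

```octave
% two symmetric matrices that are diagonal in the same basis do commute,
% and their eigenvalues do add up
P = orth(rand(3));                      % a random orthonormal basis
A = P * diag([ 1  2  3]) * P';          % A is diagonal in the basis P...
B = P * diag([10 20 30]) * P';          % ...and so is B

norm(A*B - B*A)                         % practically zero: they commute
sort(eig(A)) + sort(eig(B)) - sort(eig(A+B))   % practically zero as well
```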
\medskip
The moral of this fable is that if we have a case that is the combination of two other simpler cases (say one with only surface loads and one with only volumetric loads), in general we cannot just add the principal stresses (or Von Mises) of two cases and expect to obtain a correct answer. We have to solve the full case again (both the surface and the volumetric loads at the same time). In case we are stubborn enough and still want to stick to solving the cases separately, there is a trick we can resort to. Instead of summing principal stresses, what we can do is to sum either displacements or the individual stress components, which are fully linear. So we might pre-deform (or pre-stress) case B with the results from case A and then expect the FEM program to obtain the correct stresses for the combined case. However, this scheme is actually far more complex than just solving the combined case in a single run and it also needs an educated guess and/or trial and error to know at what time the pre-deformation or pre-stressing should be applied to take into account the seismic load. divert(0)
divert(-1) After discussing linearity, let us now dig into linearisation. The name is similar but these two animals are very different beasts.divert(0) We said in\ [@sec:case] that the ASME Boiler and Pressure Vessel Code was born long before modern finite-element methods were developed, let alone massively available for general engineering analysis (is the word democratised?). Yet the code provides a comprehensive, sound and---more importantly---widely and commonly-accepted body of knowledge, such that the regulatory authorities require nuclear plant owners to enforce it.
One of the main issues of the ASME code refers to what is known as “membrane” and “bending” stresses. These are defined in section\ VIII annex 5-A, although they are generally used in other sections, particularly section\ III. divert(-1)Briefly, they give the zeroth-order (membrane) and first-order (bending) moments of the stress distribution along a so-called Stress Classification Line or SCL, which should be chosen depending on the type of problem under analysis. divert(0)
Briefly, the membrane stress gives a measure of the internal stresses that are needed to balance the external loads. The bending stress gives a measure of the internal stresses which are not self-balanced and arise due to the external load operating on the solid under study. These two measures are computed along the already-introduced Stress Classification Lines. There are very interesting physical interpretations of what these stresses mean. They are beyond the scope of this chapter but reference [@angus-linearization] provides a very interesting study on the subject.
The computation of these membrane and bending stresses is called “stress linearisation” because it is like computing the Taylor expansion (or, to differentiate between balanced and non-balanced stresses, the expansion in Legendre polynomials) of an arbitrary stress distribution along a line, and retaining the first two terms. That is to say, to obtain a linear approximation of the fully-detailed stress distribution along the SCL. As for the ASME requirements, they are a way of having the average plus the linearly-varying contributions of a certain stress distribution along the pipe’s wall thickness.
divert(-1) \medskip
Now the optional (but recommended) mathematical details. According to ASME\ VIII Div.\ 2 Annex\ 5-A, the expression for computing the $i$-$j$-th element of the membrane tensor is
$$ \text{M}_{ij} = \frac{1}{t} \cdot \int_0^t \sigma_{ij}(t^\prime) \, dt^\prime $$ where $t$ is the length and $0<t^\prime<t$ is the parametrisation variable of the SCL. The other linearised stress, namely the bending stress tensor $\text{B}$, is
$$ \text{B}_{ij} = \frac{6}{t^2} \cdot \int_0^t \sigma_{ij}(t^\prime) \cdot \left( \frac{t}{2}-t^\prime\right) \, dt^\prime $$
For the fatigue assessment, it is the sum\ $\text{MB}$ that measures the stress
$$ \text{MB}_{ij} = \text{M}_{ij} \pm \text{B}_{ij} $$ where the sign should be taken such that the resulting stress intensity (i.e. Tresca, Von Mises, Principal\ 1, etc.) represents the worst-case scenario. Older versions of ASME\ VIII preferred the Tresca criterion, while newer versions switched to Von\ Mises. Nevertheless, ASME\ III still uses Tresca.
In any case, $\text{MB}_{ij}$ should be taken as a measure of the primary stress at the internal surface of the pipe. In effect, let us assume we have a stress distribution\ $\sigma(t^\prime)$ along a certain line of length $t$ such that\ $\sigma(0)$ and $\sigma(t)$ are the stresses at the internal and external surfaces, respectively. The primary stress distribution, i.e. the stress that is not self balancing will be of the simple linear form
$$ \sigma(t^\prime) = a\cdot t^\prime + b $$ because any other higher-order polynomial term would self-balance (exercise: prove it). The membrane and bending stresses, according to ASME would then be
$$ \text{M} = \frac{1}{t} \int_{0}^{t} \big[ a\cdot t^\prime + b \big] \, dt^\prime = \frac{1}{2} \cdot a \cdot t + b $$ $$ \text{B} = \frac{6}{t^2} \int_{0}^{t} \big[ a\cdot t^\prime + b \big] \cdot \left( \frac{t}{2} - t^\prime \right) \, dt^\prime = -\frac{1}{2} \cdot a \cdot t $$
In effect, we can see that $$ \sigma(0) = b = \text{M} + \text{B} $$ and
$$ \sigma(t) = a \cdot t + b = \text{M} - \text{B} $$
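These two identities are easy to check numerically. A small Octave sketch (with made-up numbers for the thickness and the coefficients, not taken from any real pipe) could be:

```
t = 0.1;                     % SCL length, e.g. the wall thickness
a = 2000; b = 50;            % slope and intercept of the linear profile
lin = @(tp) a*tp + b;
M = 1/t   * quad(lin, 0, t)                          % gives a*t/2 + b
B = 6/t^2 * quad(@(tp) lin(tp).*(t/2 - tp), 0, t)    % gives -a*t/2
[lin(0), M + B]              % sigma(0) = M + B
[lin(t), M - B]              % sigma(t) = M - B
% adding a self-balancing term (a shifted Legendre P2) changes neither M nor B
pk = @(tp) 30*(3*(2*tp/t - 1).^2 - 1)/2;
1/t   * quad(@(tp) lin(tp) + pk(tp), 0, t)                % still a*t/2 + b
6/t^2 * quad(@(tp) (lin(tp) + pk(tp)).*(t/2 - tp), 0, t)  % still -a*t/2
```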
No need to know or even understand these integrals, which for sure are not introduced to students in regular college courses. But it would be good to, as linearisation is a cornerstone subject for any serious mechanical analysis of pressurised components following the code. So get a copy of ASME sec.\ VIII div.\ 2 annex\ 5-A and then search online for “stress linearisation” (or “linearization”).
divert(0)
divert(-1)
Let us now make a (tiny) step from the general and almost philosophical subject from the last section down to the particular case study, and reconsider the infinite pressurised pipe once again. It is time to solve the problem with a computer using finite elements and to obtain some funny coloured pictures instead of just equations (like we did in [@sec:infinite-pipe]).
The first thing that has to be said is that, as with any interesting problem, there are literally hundreds of different ways of solving it, each of them yielding its own particular conclusions. For example, one can:
::::: {#fig:quarter}
{#fig:cube-struct width=50%}\
{#fig:quarter-caeplex width=50%}
Two of the hundreds of different ways the infinite pressurised pipe can be solved using FEM. The axial displacement at the ends is set to zero, leading to a plane-strain condition :::::
You can see both the exponential nature of each added bullet and how easily we can add further choices to solve a FEM problem. And each of these choices will reveal something about the nature of either the mechanical problem or the numerical solution. It is not possible to teach every possible lesson from every outcome in college, so you will have to learn them by yourself by getting your hands dirty. I have already tried to address the particular case of the infinite pipe in a recent report that is worth reading before carrying on with this article. The main conclusions of the report are:
* Engineering problems ought not to be solved using black boxes (i.e. privative software whose source code is not freely available)---more on the subject below in\ [@sec:two-materials].
* The pressurised infinite pipe has only one independent variable (the radius $r$) and one primary dependent variable (the radial displacement $u_r$).
* The problem has an analytical solution for the radial displacement\ $u_r$ and for the radial\ $\sigma_r$, tangential\ $\sigma_\theta$ and axial\ $\sigma_z$ stresses.
* There are no shear stresses, so these three stresses are also the principal stresses.
* Analytical expressions for the membrane and membrane-plus-bending stresses along any radial SCL can be obtained.
* The spatial domain can be discretised using linear or higher-order elements. In particular, first and second-order elements have been used in the report.
::::: {#fig:error-vs-cpu}
{#fig:error-M-vs-cpu width=49%}
{#fig:error-MB-vs-cpu width=49%}
Error in the computation of the linearised stresses vs. CPU time needed to solve the infinite pipe problem using the finite element method. :::::
An additional note should be added. The FEM solution, which not only gives the nodal displacements but also a method to interpolate these values inside the elements, does not fully satisfy the original equilibrium equations at every point (i.e. the strong formulation). It is an approximation to the solution of the weak formulation that is close (measured in the vector space spanned by the shape functions) to the real solution. Mechanically, this means that the FEM solution leads only to nodal equilibrium but not point-wise equilibrium.
divert(-1)
dnl The last two bullets above lead to an issue that has come up many times when discussing convergence with respect to the mesh size with other colleagues. There apparently exists a common misunderstanding that the number of elements is the main parameter that defines how complex a FEM model is. This is strange, because even in college we are taught that the most important parameter is the size of the stiffness matrix, which is three times (for 3D problems with the displacement-based formulation) the number of nodes.
dnl Let us pretend we are given the task of comparing two different FEM programs. So we solve the same problem in each one and see what the results are. I have seen the following situation many times: the user loads the same geometry in both programs, runs the meshing step in both of them so that the number of elements is more or less the same (because she wants to be “fair”) and then solves the problem. Voilà! It turns out that the first program defaults to first-order elements and the second one to second-order elements. So if the first one takes one minute to obtain a solution, the second one should take nearly four minutes. How come that is a fair comparison? Or it might be the case that one program uses tetrahedra while the other one defaults to hexahedra. Or any other combination. In general, there is no single problem parameter that can be fixed to have a “fair” comparison, but if there was one, it would definitely be the number of nodes rather than the number of elements. Let us see why.
dnl \medskip
Fire up your imagination again and make a thought experiment in which you have to compare, say, a traditional FEM approach with a radical new formulation that a crazy mathematician from central Asia came up with, claiming it is a far superior theory to our beloved finite elements (or, for that matter, any other formulation from\ [@sec:formulations]). How can we tell if the guy is really a genius or purely nuts? Well, we could solve a problem for which we can compute the analytical solution (for example the infinite pipe from\ [@sec:infinite-pipe]), first with the traditional method ([@sec:infinite-pipe-fem]) and then with the program that uses the new formulation. Say the traditional FEM gives an error between 1% and 5% running in a few seconds depending on the mesh size. The new program from the crazy guy takes no input parameters and gives an error of 0.1%, but it takes one week of computation to produce a result. Would you say that the new radical formulation is really far superior?
What I would do is to run a FEM program that takes also one week to compute, and only then compare the errors. So that is why\ [@fig:error-vs-cpu] uses the CPU time in the abscissas rather than the number of elements to compare first and second-order formulations.
To fix ideas, let us stick to a linear elastic FEM problem. The CPU time needed to completely solve such a problem can be divided into four steps:
The effort needed to compute a discretisation of a continuous domain depends on the meshing algorithm. But nearly all meshers first put nodes on the edges (1D), then on the surfaces (2D) and finally on the volumes (3D). Afterwards, they join the nodes to create the elements. Depending on the topology (i.e. tetrahedra, hexahedra, pyramids, etc) and the order (i.e. linear, quadratic, etc.) this last step will vary, but the main driver here is the number of nodes. Try measuring the time needed to obtain grids of different sizes and kinds with your mesher.
The stiffness matrix is a square matrix that has\ $NG$ rows and\ $NG$ columns, where $N$ is the number of nodes and $G$ is the number of degrees of freedom per node, which for three-dimensional problems is $G=3$. But even though FEM programs have to build a $NG\times NG$ matrix, they usually sweep through elements rather than through nodes, and then scatter the entries of the elemental matrices into the global stiffness matrix. This is called the assembly of the matrix. So the effort needed here depends again on how the solver is programmed, but it is a combination of the number of elements and the number of nodes.
For a fixed number of nodes, first-order grids have far more elements than second-order grids because in the first case each node has to be a vertex, while in the latter half of the nodes will be vertices and the other half will sit on the edges (think!). So the sweep is larger for linear grids. But the effort needed to integrate quadratic shape functions is greater than for the linear case, so these two effects almost cancel out.
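To make the “sweep through elements and scatter” idea concrete, here is a deliberately tiny one-dimensional sketch in Octave (two-node bar elements, made-up data); real solvers do the same thing with bigger elemental matrices and smarter sparse assembly:

```
x = linspace(0, 1, 11)';           % 11 nodes -> 10 linear elements
N = numel(x);
E = 200e9;                         % Young's modulus (could vary per element)
K = sparse(N, N);                  % global stiffness matrix, mostly zeros
for e = 1:N-1                      % sweep over elements, not over nodes
  h   = x(e+1) - x(e);             % element size
  Ke  = E/h * [ 1 -1; -1 1 ];      % elemental stiffness matrix
  idx = [e, e+1];                  % global indices of this element's nodes
  K(idx,idx) = K(idx,idx) + Ke;    % scatter ("assemble") into the global matrix
end
nnz(K) / numel(K)                  % fraction of non-zeros: the matrix is sparse
```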
The linear FEM problem leads of course to a system of\ $NG$ linear equations, cast in matrix form by the stiffness matrix\ $K$ and a right-hand side vector\ $\mathbf{b}$ containing the loads (both volumetric and the surface ones coming from the boundary conditions):
$$K \cdot \mathbf{u} = \mathbf{b}$${#eq:kub}
The objective of the solver is to find the vector\ $\mathbf{u}$ of nodal displacements that satisfies the momentum equilibrium. Luckily (well, not purely by chance but by design) the stiffness matrix is almost empty. It is called a sparse matrix because most of its elements are zero. If it were fully populated, then a problem with just 100k nodes would need more than 700\ GB of RAM just to store the matrix elements, rendering FEM practically impossible. And even though the stiffness matrix is sparse, its inverse is not, so we cannot solve the elastic problem by “inverting” the matrix. Particular methods to represent and, more importantly, to solve linear systems involving this kind of matrix have been developed, and these are the methods used by finite-element (and the other finite-cousin) programs. In general there are two approaches
::::: {#fig:test}
{#fig:test1 width=48%}\
{#fig:test2 width=48%}
Structure of the stiffness matrices for the same FEM problem with 10k nodes. Red (blue) are positive (negative) elements. :::::
In a similar way, different types of elements will give rise to different sparsity patterns which change the effort needed to solve the problem. In any case, the base parameter that controls the problem size and thus provides a basic indicator of the level of difficulty the problem poses is the number of nodes. Again, not the number of elements, as the solver does not even know if the matrix comes from FEM, FVM or FDM.
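To get a feel for the sizes involved, the 700 GB figure quoted above is a one-line computation, and a sparse matrix of comparable size stores only a tiny fraction of that (the `spy` function is also a handy way of looking at patterns like the ones in [@fig:test]); the density below is a made-up but plausible value, roughly thirty non-zeros per row:

```
N = 100e3; G = 3;              % 100k nodes, 3 DOFs per node
(N*G)^2 * 8 / 1e9              % ~720 GB for a dense double-precision matrix
S = sprand(3e5, 3e5, 1e-4);    % a sparse matrix of the same size, ~30 entries per row
whos S                         % on the order of 150 MB instead of 720 GB
% spy(S)                       % uncomment to look at the sparsity pattern
```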
In the displacement-based formulation, the solver finds the displacements\ $\mathbf{u}(\mathbf{x})$ that satisfy [@eq:kub], which are the primary unknowns. But from\ [@sec:tensor] we know that we have actually solved the problem only after we have the stress tensors at every location\ $\mathbf{x}$, which are the secondary unknowns. So the FEM program has to compute the stresses out of the displacements. It first computes the strain tensor, which is built from the nine partial derivatives of the three displacements with respect to the three coordinates. Then it computes the stress tensor (already introduced in\ [@sec:tensor]) using the materials’ strain-stress constitutive equations, which involve the Young’s Modulus\ $E$, the Poisson ratio\ $\nu$ and the spatial derivatives of the displacements\ $\mathbf{u}=[u,v,w]$. This sounds easy, as we (well, the solver) know what the shape functions are for each element, and then it is a matter of computing nine derivatives and multiplying by something involving\ $E$ and\ $\nu$. Yes, but there is a catch. As the displacements\ $u$, $v$ and\ $w$ are computed at the nodes, we would like to also have the stresses at the nodes. However,
i. the displacements\ $\mathbf{u}(\mathbf{x})$ are not differentiable at the nodes, and
ii. if the node belongs to a material interface, neither\ $E$ nor\ $\nu$ are defined.
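
Leaving those two caveats aside for one moment, the “easy” part (going from the nine derivatives to a stress tensor) is indeed short. A sketch in Octave, assuming isotropic linear elasticity written in terms of the Lamé parameters and with a made-up displacement gradient, would be:

```
E  = 200e9;  nu = 0.3;                    % steel-ish material properties
lambda = E*nu/((1+nu)*(1-2*nu));          % Lame parameters from E and nu
mu     = E/(2*(1+nu));
gradu  = 1e-4 * rand(3);                  % made-up displacement gradient
epsilon = (gradu + gradu')/2;             % small-strain tensor (symmetric part)
sigma   = lambda*trace(epsilon)*eye(3) + 2*mu*epsilon   % Cauchy stress tensor
```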
::::: {#fig:derivatives}
{#fig:slab-1-0 width=48%}\
{#fig:slab-1-1 width=48%}
{#fig:slab-2-0 width=48%}\
{#fig:slab-2-1 width=48%}
FEM solution of a problem with eight linear/quadratic, uniform/non-uniform elements. The reference solution is a cosine. Plain averaging works for uniform grids but fails in the non-uniform cases. :::::
Now proceed to picturing the general three-dimensional cases with unstructured tetrahedra. What is the derivative of the displacement\ $v$ in the\ $y$ direction with respect to the $z$ coordinate at a certain node shared by many tetrahedra? What if one of the elements is very small? Or it has a very bad quality (i.e. it is deformed in one direction) and its derivatives cannot be trusted? Should we still average? Should this average be weighted? How?
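Before going on, you can play the same game in one dimension with a handful of Octave lines (the grid below is random, hence deliberately non-uniform, mimicking what [@fig:derivatives] shows):

```
x = sort([0; 2*pi*rand(7,1); 2*pi]);    % 9 nodes, non-uniform spacing
u = cos(x);                             % nodal values of a known function
dudx_elem = diff(u) ./ diff(x);         % one constant derivative per element
% plain average of the two neighbouring elemental values at each interior node
dudx_avg = (dudx_elem(1:end-1) + dudx_elem(2:end)) / 2;
[dudx_avg, -sin(x(2:end-1))]            % compare against the exact derivative:
                                        % good where neighbouring elements are
                                        % similar in size, poor where they differ
```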
Detailed mathematics show that the locations where the derivatives of the interpolated displacements are closest to the real (i.e. the analytical, in problems that have one) solution are the elements’ Gauss points. Even better, the material properties at these points are continuous (they are usually uniform, but they can depend on temperature for example) because, unless we are using weird elements, there are no material interfaces inside elements. But how do we manage a set of stresses given at the Gauss points instead of at the nodes? Should we use one mesh for the input and another one for the output? What happens when we need to know the stresses on a surface and not just in the bulk of the solid? There are still no one-size-fits-all answers. There is a very interesting blog post by Nick Stevens that addresses the issue of stresses computed at sharp corners. What does your favourite FEM program do with such a case?
In any case, this step takes a non-negligible amount of time. The most common approach, i.e. the node-averaging method, is of course driven mainly by the number of nodes. So, all in all, these are the reasons to use the number of nodes instead of the number of elements as a basic parameter to measure the complexity of a FEM problem.
Let us review some issues that appear when solving our case study and that might not have been thoroughly addressed back during our college days.
::::: {#fig:two-cubes}
{#fig:two-cubes2 width=48%}\
{#fig:two-cubes4 width=48%}
Two cubes of different materials share a face and a pressure is applied at the right-most face. :::::
To simplify the discussion that follows, let us replace for one moment the full $3 \times 3$ tensor and the nine partial derivatives of the displacement by just one strain\ $\epsilon$, and let the linear elastic strain-stress relationship be the simple scalar expression
$$ \sigma = E \cdot \epsilon $$
Faced with the problem of computing the stress\ $\sigma$ at one node shared by many elements, we (actually our favourite FEM program) might:
There might be other choices as well. Do you know what your favourite FEM program does? Now follow up with these questions:
a. Does the manual say what it does?
b. Does the manual say how it does what it does?
c. Does it provide the user (i.e. you) with different choices?
d. Can you tell what these options entail?
e. …
You can still add a lot of questions that you should be having right now.
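To make the ambiguity tangible with the scalar simplification above, picture one node shared by two one-dimensional elements made of different materials. A sketch (with made-up numbers and hypothetical post-processing choices, not the behaviour of any particular program) shows that the answers do differ:

```
E1 = 200e9;  E2 = 100e9;      % two different Young's moduli at the interface
eps_left  = 1.0e-4;           % strain computed from the element on one side
eps_right = 1.1e-4;           % strain computed from the element on the other side
sigma_left  = E1 * eps_left;
sigma_right = E2 * eps_right;
% three of the things a post-processor might silently do at the shared node:
(sigma_left + sigma_right)/2            % average the two one-sided stresses
(E1 + E2)/2 * (eps_left + eps_right)/2  % average E and strain first, then multiply
[sigma_left, sigma_right]               % or report both values and let you decide
```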
divert(-1) If you cannot get a clear answer for at least one of them, then start to worry. After you do, add the following question:
Do you believe your favourite FEM program’s manual?
What we, as responsible engineers who have to sign a report stating that a nuclear power plant will not collapse due to fatigue in its pipes, have to do is to fully understand what is going on with our stresses. Richard Stallman says that the best way to solve a problem is to avoid it in the first place. In this case, we should avoid having to trust a written manual by relying on software whose source code is available. What we need is the capacity (RMS calls it freedom) to see the detailed steps performed by the program so we can answer any question we (or other people) might have.
Without resorting to philosophical digressions about the difference between free and open-source software (not because it is not worth it, but because it would take a whole book), the programs that make their source code available to their users are called open-source software. If the users can also modify and re-distribute the modified versions, they are called free software. Note that the important concept here is freedom, not price. In Spanish (my native language) it would have been easier because there are two separate words for free as in freedom (“libre”) and for free as in price (“gratis”).
In effect, a couple of years ago Angus Ramsay noted a weird behaviour in the results given by a certain commercial non-free FEA software regarding the handling of expansion coefficients from ASME data. To understand what was going on, Angus and I had to guess what the program was doing in order to reproduce the allegedly weird results. Finally, it was a matter of how the data was rounded to be presented in a paper table rather than a programming flaw. Nevertheless, we were lucky our guesses led us to a reasonable answer. If we had had access to the program’s source code, we could have analysed the issue thoroughly and far more efficiently. Sure, we might not have the same programming skills the original authors of the software have, but if it had been free software we would have had the freedom to hire a programmer to help us out. That is what free means. In Eric Raymond’s words, “given enough eyeballs, all bugs are shallow.” This is rather important in engineering software where verification and validation is a must, especially in regulated fields like the nuclear industry. Just think about how a piece of software can be verified if its source code is not available for independent analysis.
So now, ask yourself another question:
Do you trust your favourite FEM program?
Back to the two-material problem, all the discussion above in\ [@sec:two-materials] about non-continuous derivatives applies to a sharp, abrupt interface. In the study case the junctions are welded, so there is a heat-affected zone with changes in the material microstructure. Therefore, there exists a smooth transition from the mechanical properties of one material to the other in a way that is very hard to predict and to model. In principle, the assumption of a sharp interface is conservative in the sense that the computed stresses are expected to be larger than the actual ones. There cannot be an SCL exactly on a material interface, so there should be at least two SCLs, one at each side of the junction as\ [@fig:weldolet-scls] illustrates. The actual distance would have to be determined first as an educated guess, then via trial and error and finally in accordance with the regulator. divert(0)
divert(-1)
Time for another experiment. We know (more or less) what to expect from an infinite pressurised pipe from\ [@sec:infinite-pipe]. What if we added a branch to such a pipe? Even more, what if we successively varied the diameter of the branch to see what happens? This is called parametric analysis, and sooner or later (if not now) you will find yourself performing this kind of computation more often than any other.
divert(-1) So here come the five Feynman-Ohno questions:
Why do you want to perform a parametric computation?
So we can study how the linearised stresses change with the inclusion of a branch of a certain diameter (and because we can, using the PARAMETRIC keyword in Fino).
Why do you want to know how the stresses change with the inclusion of a branch?
Because, if we have the time, it is worth doing something harder than originally asked (football joke: it is like training throw-ins with watermelons during the week so you can reach the area with a regular ball on the weekends).
Why do you want to do something harder?
Because getting out of our comfort zone once in a while is a healthy habit.
Why is getting out of our comfort zone a healthy habit?
Because it fires up a certain part of our brains that keeps us active and allows us to better understand what it is that is going on with the problem we are solving.
Why do you want to understand what is going on?
For the same reason you are now reading this chapter.
divert(-1)
::::: {#fig:tee-geo}
{#fig:tee-geo1 width=54%}\
{#fig:tee-geo2 width=44%}
Geometry of the parametric 12-inch tee for the particular case of a 4-inch branch :::::
The boundary conditions are
::::: {#fig:tee-scls}
{#fig:tee-scls1 width=29%}\
{#fig:tee-scls2 width=67%}
:::::
::::: {#fig:tee-MB}
{#fig:M width=48%}\
{#fig:B width=48%}
Parametric stresses as a function of the nominal diameter\ $d_b$ of the branch. :::::
::::: {#fig:tee-post}
{#fig:tee-post-2 width=30%}\
{#fig:tee-post-5 width=30%}\
{#fig:tee-post-10 width=30%}
Von\ Mises stress and 400x warped displacements for three values of\ $d_b$. :::::
[@Fig:tee-post] illustrates how the pipes deform when subject to the internal pressure. When the branch is small, the problem resembles the infinite-pipe problem where the main pipe expands radially outward and there is only traction. For large values of\ $d_b$, the pressure in the branch bends down the main pipe, generating a complex mixture of traction and compression. The tipping point seems to be around a branch diameter\ $d_b\approx 5$\ in.
Do you now see the added value of training throw-ins with watermelons? We might go on…
Most of the time at college we would barely do what was needed to pass one course and move on to the next one. If you have the time and are considering a career related to finite-element analysis, please do not. Clone the repository ([@sec:online]) with the input files for Fino and start playing. If you get stuck, do not hesitate to ask for help in wasora’s mailing list.
One further detail: it is always a healthy sanity check to try to explain the numerical results based on physical reasoning (i.e. “with your fingers”) as we did two paragraphs above. Most of the time you will be solving problems whilst already knowing what the result would (or ought to) be.
divert(0)
A fellow mechanical engineer who went to the same high school I did, who went to the same engineering school I did and who worked at the same company I did, but who earned a PhD in Norway, once told me two interesting things about finite-element graduate courses. First, that in Trondheim the classes were taught by faculty from the mathematics department rather than from the mechanical engineering department. It made complete sense to me, as I have always thought of finite elements mainly as a maths subject. And even though some engineers might know some maths, it is nothing compared to actual mathematicians. Secondly, that they called the thermal, natural-oscillation and elastic problems the rhyming trio “bake, shake and break” (they also had “wake” for fluids, but that is a different story). These are just the three problems listed in section\ [@sec:piping-nuclear] that we need to solve in our nuclear power plant.
So here we are again with the case study, where we have to compute the linearised stresses at certain SCLs located near the interface between two different kinds of steel during operational and incidental transients of the plant. The first part is then to “bake” the pipes, modelling a thermal transient with time-dependent temperature boundary conditions on the inner surface of the pipes following the postulated transients from [@fig:pt]. This step gives a time and space-dependent temperature\ $T(x,y,z,t)$ within the bulk of the pipes.
We then proceed to “shake” the pipes. That is to say, we obtain a distributed load vector\ $\mathbf{f}(x,y,z)$ which is statically equivalent to the design earthquake.
:::: {#fig:MB-scl}
{#fig:MB-scl-1}
Juxtaposition of the linearised MB principal stresses at two SCLs. ::::
Is the last bullet right? Surely you’re joking, Mr.\ Theler! Linear problems are simple and almost useless. How can this extremely complex problem be linear? Well, let us see. First, there are two main kinds of non-linearities in FEM:
The first one is easy. Due to the fact that the pipes are made of steel, it is expected that the actual deformations are relatively small compared to the original dimensions. This leads to the fact that the mechanical rigidity (i.e. the stiffness matrix\ $K$) does not change significantly when the loads are applied. Therefore, we can safely assume that the problem is geometrically linear.
Let us now address material non-linearities. On the one hand we have the temperature-dependent issue. According to ASME\ II part\ D, what depends on temperature\ $T$ is the Young’s Modulus\ $E$. But the stress-strain relationship is still
$$ \sigma = E(T) \cdot \epsilon $$
What changes with temperature is the slope of\ $\sigma$ with respect to\ $\epsilon$ (think and imagine!), but the relationship between them is still linear.
On the other hand, we have a non-trivial temperature distribution\ $T(\mathbf{x}, t)$ within the pipes that is a snapshot of a transient heat conduction problem at a certain time\ $t$ (think and picture yourself taking photos of the temperature distribution changing in time and obtaining something like [@fig:valve-temp]). Let us now forget about the time, as after all we are solving a quasi-static elastic problem. Now you can trust me or ask a FEM teacher, but the continuous displacement formulation can be loosely written as
$$ K\big[E\left(T(\mathbf{x})\right), \mathbf{x}\big] \cdot \mathbf{u}(\mathbf{x}) = \mathbf{b}(\mathbf{x})$$
If you notice, the complex dependence of the stiffness matrix\ $K$ can be reduced to just the spatial vector\ $\mathbf{x}$:
$$ K(\mathbf{x}) \cdot \mathbf{u}(\mathbf{x}) = \mathbf{b}(\mathbf{x})$$
And this last expression is linear in\ $\mathbf{u}$! In effect, the spatial discretisation scheme used in the finite-element method is based on integration over\ $\mathbf{x}$ in the problem domain (i.e. the geometry). As\ $K$ and\ $\mathbf{b}$ depend only on\ $\mathbf{x}$ and not on the unknown displacements\ $\mathbf{u}(\mathbf{x})$, after integration one gets just numbers inside $K$ and $\mathbf{b}$. Again, you can either, in increasing order of recommendation:
To recapitulate, the steps discussed so far include
A pretty nice list of steps, which definitely I would not have been able to tackle when I was in college. Would you?
Strictly speaking, finite elements are not needed anymore at this point of the analysis. But even though we are (or want to be) FEM experts, we have to understand that if the objective of a work is to evaluate fatigue (or fracture mechanics or whatever), finite elements are just a means and not an end. If we just mastered FEM and nothing else, our field of work would be narrow and bounded. We need to use all of our computational knowledge to perform actual engineering tasks and to be able to tell our bosses and/or clients whether the pipe will fail or not. This important hint is indirectly instilled in college but it is definitely reinforced afterwards when working with actual clients and bosses.
Another comment I would like to add is that I had to learn fatigue practically from scratch when faced with this problem for the first time in my engineering career. I remembered some basics from college (like the general introduction from [@sec:fatigue]), but I lacked the skills to perform a real computation by myself. Luckily there still exist books, there are a lot of interesting online resources (not to mention Wikipedia) and, even more luckily, there are plenty of fellow engineers that are more than eager to help you. My second hint in this section is: when faced with a new challenging problem, read, learn and ask real people for guidance to see if you got what you read right.
\medskip
Back in\ [@sec:case], we said that people noticed there were some environmental factors that affected the fatigue resistance of materials, which is exactly what happens in our case study: the internal faces of the piping system are in contact with water. Even more, possibly heavy water. The basic ASME approach does not take these factors into account, and is thus regarded as fatigue “in air.” We are interested in taking them into account, so we follow the US\ Nuclear Regulatory Commission guidelines to evaluate fatigue “in water” [@nrc].
We already said in\ [@sec:fatigue] that the stress-life fatigue assessment method gives the limit number\ $N$ of cycles that a certain mechanical part can withstand when subject to a certain periodic load of stress amplitude\ $S_\text{alt}$. If the actual number of cycles\ $n$ for which the load is applied is smaller than the limit\ $N$, then the part is fatigue-resistant. In our case study there is a mixture of several periodic loads, each one expected to occur a certain number of times. ASME’s way to evaluate the resistance is first to build a juxtaposed stress history from all the transients under consideration and then to break it up into partial stress amplitudes\ $S_{\text{alt},j}$ between a “valley” and a “peak.” Each valley-peak pair\ $j$ is assigned an individual usage factor\ $U_j$ defined as
$$U_j = \frac{n_j}{N_j}$$
Under the assumption that Miner’s rule (a.k.a. the Palmgren rule) holds, the overall cumulative usage factor is then the algebraic sum of the partial contributions [@schijve]:
$$\text{CUF} = U_1 + U_2 + \dots + U_j + \dots$$
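Written as a script, Miner’s rule is just a sum; the cycle counts and allowable cycles below are made-up numbers, not taken from any ASME curve:

```
n = [250 100 100 50];         % expected cycles for each valley-peak pair
N = [1e4 2e4 5e4 1e5];        % allowable cycles from the S-N curve at each S_alt
U = n ./ N;                   % individual usage factors
CUF = sum(U)                  % the part resists fatigue if CUF < 1
```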
When\ $\text{CUF} < 1$, the part under analysis can withstand the proposed cyclic operation. Now, if the valley of the partial stress amplitude corresponds to one transient and the peak to another one, then the following note in ASME III’s NB-3224(5) should be followed:
In determining $n_1$, $n_2$, $n_3$, $\dots$, $n_j$ consideration shall be given to the superposition of cycles of various origins which produce a total stress difference range greater than the stress difference ranges of the individual cycles. For example, if one type of stress cycle produces 1,000 cycles of a stress difference variation from zero to +60,000\ psi and another type of stress cycle produces 10,000 cycles of a stress difference variation from zero to −50,000\ psi, the two types of cycle to be considered are defined by the following parameters:
(a) for type 1 cycle, $n_1 =$ 1,000 and $S_{\text{alt},1} = (60,000 + 50,000)/2$;
(b) for type 2 cycle, $n_2 =$ 9,000 and $S_{\text{alt},2} = (50,000 + 0)/2$.
This cryptic paragraph is a clear example of stuff that cannot be learned at college. No matter how good your university is, there is no way to cover all theories and methodologies which a mechanical engineer could need in his or her professional life.
Let us start by taking into account the juxtaposed stress histories from [@fig:MB-scl-1]. ASME\ NB-3216 requires taking the $\text{MB}_1-\text{MB}_3$ difference with respect to the initial stress so as to start with a zero value. This differential stress history $\Delta \text{MB}^{\prime}_{31}$, which is shown in [@fig:extrema-1] for SCL\ #1 along with the temperature and pressure transients for reference, is used to evaluate fatigue resistance “in air” as follows.
\rowcolors{1}{black!0}{black!10}
\begin{table}
\begin{center}
\begin{tabular}{
c
S[table-format=3.1] S[table-format=3.1]
c
S[table-format=3.0]
c
}
\toprule
{$t$} &
{$\Delta \text{MB}^{\prime}_{31}$} & {$\Delta[\sigma_3 - \sigma_1]$ } &
{Transient} &
{Cycles} &
{Extrema} \\
\midrule
0 & 0.0 & 0.0 & \#1 & 250 & initial \\
352 & -381.7 & -298.5 & \#1 & 250 & min \\
3131 & -2.1 & -3.2 & \#2 & 200 & max \\
3262 & -301.0 & -270.6 & \#3 & 100 & min \\
4812 & -146.9 & -133.9 & \#3 & 100 & max \\
4823 & -284.0 & -199.5 & \#4 & 100 & min \\
6523 & -330.0 & -284.2 & \#4 & 100 & min \\
6712 & -282.5 & -253.7 & \#4 & 100 & final \\
\bottomrule
\end{tabular}
\end{center}
\caption{\label{tbl:extrema} Extrema of the juxtaposed stress history of fig.~\ref{fig:extrema-1}}
\end{table}
First, all the local extrema (i.e. whether a minimum or a maximum) need to be identified. [@Tbl:extrema] shows the times at which these occur, the stresses associated with them and the number of cycles each of them is expected to occur. The initial and final stresses are also taken into account. It also illustrates the difference between the linearised stress and the plain Tresca scalar stress. To compute the global usage factor, we need to find all the combinations of these local extrema pairs and then sort them in decreasing order of stress difference. For example, the largest stress amplitude is found between $t=0$ and $t=352$ (this last instant contains the seismic load!). The second one is 352--3131. Then 0--6523, 3131--6523, etc. For each of these pairs, defined by the times\ $t_{1,j}$ and $t_{2,j}$, a partial usage factor\ $U_j$ should be computed. The stress amplitude\ $S_{\text{alt},j}$ which should be used to enter the $S$-$N$ curve is
$$ S_{\text{alt},j} = \frac{1}{2} \cdot k_{\nu,j} \cdot k_{e,j} \cdot \left| \Delta \text{MB}^\prime_{t_{1,j}} - \Delta \text{MB}^\prime_{t_{2,j}} \right| \cdot \frac{E_\text{SN}}{E(T_{\text{max}_j})} $$
\noindent where $k_\nu$ and $k_e$ are plastic correction factors for large loads (ASME\ part VIII div 2 sec 5.5.3.2 and part III NB-3228.5, respectively), $E_\text{SN}$ is the Young’s Modulus used to create the $S$-$N$ curve (provided in the ASME fatigue curves) and\ $E(T_{\text{max}_j})$ is the material’s Young’s Modulus at the maximum temperature within the\ $j$-th interval.
We are now in a position to comply with ASME’s obscure note about the number of cycles and assign a proper value to each\ $n_j$. Starting with the largest pair 0--352, we see that both extrema belong to transient #1 which has 250 cycles. This one is easy, because we directly associate $n_1=250$ and both of these times “disappear” as they have already consumed all of their cycles. The second largest pair was 352--3131 but 352 has just vanished so it is not considered anymore. The following is 0--6523 but zero also consumed all its cycles, so this pair is also discarded. The next is now 3131--6523, where the first one belongs to transient #2 (200 cycles) and the latter to transient #4 (100 cycles). We assign $n_2=\min(200,100)=100$ and subtract 100 from the cycles remaining for each time. Point 6523 disappears as it consumed all its initial cycles and 3131 remains with 100 cycles. The next pair is 3131--3262, with number of cycles 100 (because we just took away 100 out of the initial 200) and 150, so $n_3=100$, point 3131 disappears and 3262 remains with 50 cycles. And so on, down to the last pair. [@Tbl:table-cuf] shows the results of applying this algorithm to all the extrema in SCL\ #1. The columns try to match the “official” solution of the US\ NRC [@nrc] to a sample problem proposed by the Electric Power Research Institute [@epri], which is shown in [@tbl:cuf-nrc].
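For the curious, the bookkeeping described in the last paragraph can be sketched in a few lines of Octave. The extrema and cycle counts below are the ones from [@tbl:extrema]; the $S$-$N$ curve is a dummy placeholder (it is *not* the ASME curve) and the $k_e$, $k_\nu$ and Young’s-modulus corrections are left out, so the resulting number is meaningless; only the pairing logic matters:

```
t      = [   0   352  3131  3262  4812  4823  6523  6712];     % times of the extrema
MB     = [ 0.0 -381.7 -2.1 -301.0 -146.9 -284.0 -330.0 -282.5]; % stresses at those times
cycles = [ 250   250   200   100   100   100   100   100];     % cycles left per time
Nallow = @(Salt) 1e3 * (1e3 ./ Salt).^2;    % dummy S-N curve, for illustration only
[i, j] = find(triu(true(numel(t)), 1));     % all valley-peak pairs with i < j
[~, order] = sort(abs(MB(i) - MB(j)), 'descend');   % largest stress range first
CUF = 0;
for k = order(:)'
  a = i(k); b = j(k);
  n = min(cycles(a), cycles(b));            % cycles this pair can still consume
  if n > 0
    Salt = abs(MB(a) - MB(b)) / 2;          % no ke, knu nor E corrections here
    CUF = CUF + n / Nallow(Salt);           % accumulate the partial usage factor
    cycles(a) = cycles(a) - n;              % the pair consumes the cycles...
    cycles(b) = cycles(b) - n;              % ...from both of its extrema
  end
end
CUF
```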
dnl One table is taken from a document issued by almost-a-billion-dollar-year-budget government agency from the most powerful country in the world and the other one is from a third-world engineering startup. Guess which is which.
dnl ::::: {#tbl:cuf}
{#tbl:table-cuf width=100%}
dnl Tables of individual usage factors dnl :::::
Why all these details? Not because I want to teach you how to perform fatigue evaluations just by reading this section, without fully understanding the ASME code, taking college courses on material fatigue, reading books on the subject and even asking other colleagues. It is to show that even though these computations can be made “by hand” (i.e. using a calculator or, God forbid, a spreadsheet), when having to evaluate a few SCLs within several piping systems it is far (and I mean really far) better to automate all these steps by writing a set of scripts. Not only will the time needed to process the information be reduced, but the introduction of human errors will also be minimised and the repeatability of results will be assured---especially if working under a distributed version control system such as Git. This is true in general, so here is another tip: learn how to write scripts to post-process your FEM results (you will need a script-friendly FEM program) and you will gain considerable margins regarding time and quality. See [@sec:online] to obtain the set of scripts that detected, matched and sorted the extrema and built [@fig:extrema-1] and [@tbl:extrema] automatically.
The fatigue curves and ASME’s (both\ III and\ VIII) methodology to analyse cyclic operations assume the parts under study are in contact with air, which is not the case of nuclear reactor pipes. Instead of building a brand new body of knowledge to replace ASME, the US\ NRC decided to modify the former by adding environmentally-assisted fatigue multipliers to the traditional usage factors, formally defined as
$$F_\text{en} = \frac{N_\text{air}}{N_\text{water}} \geq 1$$
Thus, the environmentally-assisted usage factor for the $j$-th load pair is
$$\text{CUF}_{\text{en},j} = U_j \cdot F_{\text{en},j}$$
\noindent and the global cumulative usage factor in water is the sum of these partial contributions
$$\text{CUF}_\text{en} = U_1 \cdot F_{\text{en},1} + U_2 \cdot F_{\text{en},2} + \dots + U_j \cdot F_{\text{en},j} + \dots$${#eq:cufen}
In EPRI’s words, the general steps for performing an environmentally-assisted fatigue (EAF) analysis are as follows [@epri]:
Again, if $\text{CUF}_\text{en} < 1$, then the system under study can withstand the assumed cyclic loads. Note that since\ $F_{\text{en},j}>1$, it might be possible to have $\text{CUF} < 1$ and $\text{CUF}_\text{en} > 1$ at the same time. The NRC has performed a comprehensive set of theoretical and experimental tests to study and analyse the nature and dependence of the non-dimensional correction factors\ $F_\text{en}$ [@nrc]. It was found that, for a given material, they depend on:
a. the concentration\ $O(t)$ of dissolved oxygen in the water,
b. the temperature\ $T(t)$ of the pipe,
c. the strain rate\ $\dot{\epsilon}(t)$, and
d. the content of sulphur\ $S(t)$ in the pipes (only for carbon or low-alloy steels).
Apparently it makes no difference whether the environment is composed of either light or heavy water. There are somewhat different sets of non-dimensional analytical expressions that fit the value of\ $F_{\text{en}}(t)$ as a function of\ $O(t)$, $T(t)$, $\dot{\epsilon}(t)$ and $S(t)$ to experimental data. Although they are not important now, the actual expressions should be defined and agreed with the plant owner and the regulator. The main result to take into account is that\ $F_{\text{en}}(t)=1$ if\ $\dot{\epsilon}(t)\leq0$, i.e. there are no environmental effects during the time intervals where the material is being compressed.
Without further diving into another level of mathematical complexities and raising a plethora of detailed technical considerations, it is enough to directly show the results of this EAF analysis for the imaginary test case in [@tbl:table-cufen]. Actually, SCL\ #1 was chosen throughout this section because its sequence of min/max extrema was simple and thus the explanation of the procedure was easier. It is SCL\ #4 that has the largest cumulative usage factor. Indeed, [@tbl:fatigue-scl-4-air] shows that the stress history at this location is more complex than at SCL\ #1, as the heat conduction in stainless steel is smaller than in carbon steel and thus the temperature is less uniform during the transients, as we already noted in [@fig:valve-temp]. However, stainless steel is less prone to degrading its fatigue strength in contact with water, so the $F_\text{en}$ factors are smaller.
::::: {#fig:fatigue-scl-4}
{#tbl:fatigue-scl-4-air width=100%}
{#tbl:fatigue-scl-4-water width=45%}
Results of fatigue assessment in air and in water for SCL\ #4 :::::
We have travelled a non-negligible distance since we started this text. We wandered around a lot of issues, trying to solve a made-up but still pretty real-life-like engineering problem using finite elements as our primary tool. We tried to understand how a nuclear reactor works, analysed a transient thermal situation, performed a modal analysis and solved the elastic quasi-static problems taking into account a complex temperature distribution. Were all the effort and the troubles we went through really needed? To conclude this section (and almost the case), let me illustrate the importance of our path with the following sentence, which is the one to remember if you only get to keep one. Had we used an infinite thermal diffusivity\ $\kappa=\infty$ for the materials, effectively treating the temperature as uniform throughout the pipes and the valve and equal to the instantaneous temperature $T(t)$ given in the transient definition, the worst-case SCL would have been\ #1 instead of SCL\ #4 as we just found for the actual study case. The cumulative usage factors in air and in water would have been approximately\ $0.007$ and\ $0.008$ respectively. And if instead of using an infinite diffusivity we had used a zero thermal expansion coefficient\ $\alpha=0$ such that only mechanical stresses were present (even with material properties depending on the actual transient temperatures), the usage factors would go down to\ $8\times 10^{-9}$ and $2\times 10^{-8}$. So yes, all the fuss was actually necessary.
divert(-1) Once we have the instantaneous factor\ $F_{\text{en}}(t)$, we need to obtain an average value\ $F_{\text{en},j}$ which should be applied to the\ $j$-th load pair. Again, there are a few different ways of lumping the time-dependent\ $F_{\text{en}}(t)$ into a single $F_{\text{en},j}$ for each interval. Both NRC and EPRI give simple equations that depend on a particular time discretisation of the stress histories and that, in my view, are all ill-defined. My guess is that they underestimated their audience and feared readers would not understand the slightly more complex mathematics needed to correctly define the problem. The result is that they introduced a lot of ambiguities (and even technical errors) just not to offend the maths-illiterate. A decision I do not share, and another reason to keep on learning and practising maths.
::::: {#fig:cufen}
{#fig:cufen-nrc width=75%}
{#fig:cufen-seamplex width=75%}
Tables of individual environmental correction and usage factors for the NRC/EPRI “EAF Sample Problem 2-Rev.\ 2 (10/21/2011).” The reference method assigns the same\ $F_\text{en}$ to the first two rows whilst the proposed lumping scheme does show a difference ::::: divert(0)
Back in college, we all learned how to solve engineering problems. We already said that after we graduated, we felt we could solve and fix the world (once again, if you did not graduate yet, you will have this feeling shortly). But there is a real gap between the equations written in chalk on a blackboard (now probably in the form of beamer slide presentations) and actual real-life engineering problems. This chapter introduces a made-up yet almost-real case from the nuclear industry and starts by idealising the structure such that it has a known analytical solution that can be found in textbooks. Additional realism is then added in stages, allowing the reader to develop an understanding of the more complex physics in order to build a finite-element model so results can be obtained for cases where theoretical solutions are not available. Even more, a brief insight into the world of stress-life fatigue evaluation using such results further illustrates the complexities of real-life engineering analysis---even though the presented case was simplified for the sake of clarity.
divert(-1) Here is a list of the tips and homeworks that arose throughout the text:
dnl You can ask for help in our mailing list at wasora@seamplex.com. There is a community of engineers willing to help you in case you get in trouble with the repositories, the script or the input files.
divert(-1) About your favourite FEM program, ask yourself these two questions:
divert(0)
I have been dropping some hints throughout the text which I learned the hard way. But here comes the last and most important one: at the end of the journey from college theory to solving an actual engineering problem, there will be at least one report with your signature on it. Make sure you understand what the implications of that signature are. That is why we all went to college in the first place.
Here is a list of sub-problems and stuff to play with.
* The videos of the thermal transients in\ [@fig:valve]
* The animations of natural oscillations in [@fig:modes]
* A ready-to-play-with CAEplex case with the modal problem
* A Git repository with the CAD files, input files and scripts needed to reproduce all the results discussed in the text using the free and open source tools Gmsh and Fino
See https://www.seamplex.com/nafems for new material, updated links and the full version of this case with many more details about the case and the associated mathematics.
{width=100%}