gtheler · 7 years ago
parent commit dd4895b13c
1 changed file with 30 additions and 30 deletions: nafems4.md


@@ -12,14 +12,14 @@ Whether you are a student or a seasoned engineer with many years of experience,
![Simple pendulum](simple.svg){#fig:simple width=35%}\
![Real pendulum](hamaca.jpg){#fig:hamaca width=60%}

-A simple pendulum from college physics courses and a real-life pendulum. Hint: the swing’s period _does_ depend on the hanging mass. See the [actual video](hamaca.webm).
+A simple pendulum from college physics courses and a real-life pendulum. Hint: the swing’s period _does_ depend on the hanging mass. See the [actual video](hamaca.webm)
:::::

::::: {#fig:pipes}
![College pipe](infinite-pipe.svg){#fig:infinite-pipe width=40%}
![Real-life pipe](isometric.svg){#fig:isometric width=58%}

-An infinitely-long pressurised thick pipe as taught in college and an isometric drawing of a section of a real-life piping system.
+An infinitely-long pressurised thick pipe as taught in college and an isometric drawing of a section of a real-life piping system
:::::

Like the pendulums above, we will be swinging back and forth between a case study about fatigue analysis in piping systems of a nuclear power plant and more generic and even romantic topics related to finite elements and computational mechanics. These latter digressions will not remain just abstract theoretical ideas. Not only will they be directly applicable to the development of the main case, but they will also apply to a great deal of other engineering problems tackled with the finite element method (and its cousins, [@sec:formulations]).
@@ -380,7 +380,7 @@ dnl 33410 07-3-4D-29

![Detail of the weldolet-type junction](weldolet-cad2.png){#fig:weldolet-cad2}

-CAD model of a piping system with a 3/4-inch weldolet-type fork (stainless steel) from a main 12-inch pipe (carbon steel).
+CAD model of a piping system with a 3/4-inch weldolet-type fork (stainless steel) from a main 12-inch pipe (carbon steel)
:::::


@@ -415,7 +415,7 @@ dnl 33300 02-D-3-4

![The SCLs are located at both sides of the material interface in the vertical plane both at the top and at the bottom of the pipe.](valve-scls1.png){#fig:valve-scls1 width=75%}

-An example case where the SCLs are located around the junction between stainless-steel valves and carbon steel pipes.
+An example case where the SCLs are located around the junction between stainless-steel valves and carbon-steel pipes
:::::


@@ -427,7 +427,7 @@ As an example, let us consider the system depicted in\ [@fig:valve] where there

![Generalisation of the resulting temperature to the original mesh from figure [@fig:valve-mesh1]](valve-gen.png){#fig:valve-gen}

-Computation of the thermal problem in a reduced mesh and generalisation of the result to the full original 3D mesh of figure\ [@fig:valve-mesh1].
+Computation of the thermal problem in a reduced mesh and generalisation of the result to the full original 3D mesh of figure\ [@fig:valve-mesh1]
:::::


@@ -442,7 +442,7 @@ dnl 33410 10-12D-24

![Generalisation of the temperature from the reduced mesh to the full three-dimensional grid.](real-gen.png){#fig:real-gen width=80%}

-The material interface in the system from [@fig:real-life] is configured by an orifice plate made of stainless steel welded to a carbon-steel pipe.
+The material interface in the system from [@fig:real-life] is configured by an orifice plate made of stainless steel welded to a carbon-steel pipe
:::::


@@ -484,7 +484,7 @@ A real continuous solid has infinite modes of oscillation. A discretised one (us
![$i=5$](mode5.png){width=48%}\
![$i=6$](mode6.png){width=48%}

-First six natural oscillation modes for a piping section.
+First six natural oscillation modes for a piping section
:::::

These first modes, which take up most of the energy, are then used to account for the earthquake load. There are several ways of performing this computation, but the ASME\ III code states that the method known as SRSS (for Square Root of Sum of Squares) can be safely used. This method mixes the eigenvectors with the floor response spectra through the eigenvalues and gives a spatial (actually nodal) distribution of three accelerations (one for each direction) that, when multiplied by the density of the material, gives a distributed-force vector (in units of newtons per cubic millimetre, for example) which is statically equivalent to the load coming from the postulated earthquake.
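
To make the SRSS recipe concrete, here is a minimal Octave sketch; the frequencies `f`, participation factors `gamma`, mode shapes `phi` and the floor spectrum `Sa` below are made-up illustrative numbers, not data from the actual piping model:

```octave
% SRSS modal combination: square root of the sum of the squares of the
% individual modal contributions; all numbers are illustrative assumptions
f     = [4.2 11.5 18.3];              % natural frequencies [Hz]
gamma = [1.31 0.42 0.18];             % modal participation factors
phi   = [0.9 -0.4  0.2;
         0.6  0.8 -0.3];              % mode shapes: 2 nodes x 3 modes
Sa    = @(f) interp1([0 5 15 33], [2 6 4 2], f);   % toy floor spectrum [m/s^2]
a     = zeros(rows(phi), 1);
for i = 1:columns(phi)
  a += (gamma(i) * phi(:,i) * Sa(f(i))).^2;   % sum of the squares...
end
a = sqrt(a)                           % ...then the root: one acceleration per node
```

Multiplying these nodal accelerations by the material density then yields the statically-equivalent distributed force mentioned above.
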
@@ -496,7 +496,7 @@ These first modes that take up most of the energy are then used to take into acc

![$a_z$](az.png){width=80%}

-The equivalent accelerations for the piping section of [@fig:modes] for the spectra of\ [@fig:spectrum].
+The equivalent accelerations for the piping section of [@fig:modes] for the spectra of\ [@fig:spectrum]
:::::

The ASME code says that these accelerations (depicted in [@fig:acceleration]) ought to be applied twice: once with the original sign and once with the sign of every component flipped. Each application should last two seconds.
@@ -627,7 +627,7 @@ $$
![Case B, pure-shear loads](cube-shear.png){#fig:cube-shear width=48%}\
![Case C, normal plus shear loads](cube-full.png){#fig:cube-full width=48%}

-Spatial distribution of principal stress\ 3 for cases\ B and\ C. If linearity applied, case\ C would be equal to case\ B plus a constant.
+Spatial distribution of principal stress\ 3 for cases\ B and\ C. If linearity applied, case\ C would be equal to case\ B plus a constant
:::::

In the second case, the principal stresses are not uniform and have a non-trivial distribution. Indeed, the distribution of\ $\sigma_3$ obtained by [CAEplex](https://www.caeplex.com) is shown in\ [@fig:cube-shear].
@@ -635,7 +635,7 @@ Now, if we indeed were facing a fully linear problem then the results of the sum

So what is the source of this unexpected non-linear effect in an otherwise nice and friendly linear formulation? Well, you probably already know it because, after all, it is almost high-school mathematics. But I learned it long after college, when I had to face a real engineering problem and not just trivial back-of-the-envelope pencil-and-paper exercises.

-Recall that principal stresses are the eigenvalues of the stress tensor. And the fact that in a linear elastic formulation the stress tensor of case\ C above is the sum of the individual stress tensors from cases\ A and B does not mean that their eigenvalues can be summed (think about it!). Again, imagine the eigenvalues and eigenvectors of cases A & B. Got it? Good. Now imagine the eigenvalues and eigenvectors for case\ C. Should they sum up? No, they should not! Let us make another experiment, this time with matrices using [Octave](https://www.gnu.org/software/octave/) or whatever other matrix-friendly program you want.
+Recall that principal stresses are the eigenvalues of the stress tensor. And the fact that in a linear elastic formulation the stress tensor of case\ C above is the sum of the individual stress tensors from cases\ A and B does not mean that their eigenvalues can be summed (think about it!). Again, imagine the eigenvalues and eigenvectors of cases A & B. Got it? Good. Now imagine the eigenvalues and eigenvectors for case\ C. Should they sum up? No, they should not! Let us make another experiment, this time with matrices using [Octave](https://www.gnu.org/software/octave/) or whatever other matrix-friendly program you want (try to avoid black boxes as explained in\ [@sec:two-materials]).

First, let us create a 3 $\times$ 3 random matrix $R$ and then multiply it by its transpose\ $R^T$ to obtain a symmetric matrix\ $A$ (recall that the stress tensor from [@sec:tensor] is symmetric):
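
The actual listing lies beyond this hunk’s context, but a minimal sketch of the experiment could look like this (plain Octave, nothing else assumed):

```octave
R = rand(3, 3);        % a random 3x3 matrix...
A = R * R';            % ...times its transpose: symmetric, like a stress tensor
S = rand(3, 3);
B = S * S';            % a second symmetric "tensor"
eig(A) + eig(B)        % the sum of the individual eigenvalues...
eig(A + B)             % ...does not equal the eigenvalues of the sum
```

Run it a few times: the two outputs never match, which is the whole point about summing eigenvalues.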

@@ -719,18 +719,18 @@ No need to know or even understand these two integrals which for sure are not in

# The infinite pipe revisited after college {#sec:infinite-pipe-fem}

-Let us now make a (tiny) step from the general and almost philosophical subject from the last section down to the particular case, and reconsider the infinite pressurised pipe once again. It is time to solve the problem with a computer using finite elements and obtain some funny coloured pictures instead of just equations.
+Let us now make a (tiny) step from the general and almost philosophical subject from the last section down to the particular case study, and reconsider the infinite pressurised pipe once again. It is time to solve the problem with a computer using finite elements and to obtain some funny coloured pictures instead of just equations.

The first thing that has to be said is that, as with any interesting problem, there are literally hundreds of different ways of solving it, each of them yielding its own particular conclusions. For example, one can:

-1. solve a real 3D problem or a 2D axi-symmetric case (or even a 1D case using the collocation method to solve the radial dependence),
+1. solve a real 3D problem or a 2D axi-symmetric case (or even a 1D case using the [collocation method](https://en.wikipedia.org/wiki/Collocation_method) to solve the radial dependence),
2. have a full cylindrical geometry or just a half (180º), or a quarter (90º), or a thin slice (a few degrees),
3. use a structured or an unstructured grid,
4. uniformly or locally-refine the mesh (with several choices of refinement),
5. use first or second-order (or higher) elements,
6. use tetrahedra or hexahedra,
7. in the case of second-order hexahedra, use complete (i.e. 27-node hexahedron) or incomplete (i.e. 20-node hexahedron) elements,
-8. have different mesh refinements from very coarse to very fine,
+8. have different mesh sizes from very coarse to very fine,
9. solve the same problem in a few different solvers,
10. etc.
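
Just to see how fast this list multiplies, assign a handful of alternatives to each bullet (the counts below are arbitrary assumptions) and take the product:

```octave
% assumed number of alternatives for each bullet in the list above
options = [3 4 2 3 3 2 2 5 3];
printf("%d different ways to solve one single pipe\n", prod(options))
```

That is almost thirteen thousand runs of the “same” problem.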

@@ -742,21 +742,21 @@ Two of the hundreds of different ways the infinite pressurised pipe can be solve
:::::


-You can get both the exponential nature of each added bullet and how easily we can add new further choices to solve a FEM problem. And each of these choices will reveal you something about the nature of either the mechanical problem or the numerical solution. It is not possible to teach any possible lesson from every outcome in college, so you will have to learn them by yourself getting hands at them. I have already tried to address the particular case of the infinite pipe in a [recent report](https://www.seamplex.com/fino/doc/pipe-linearized/) that is worth reading before carrying on with this article. The conclusions of the report are:
+You can see both the exponential nature of each added bullet and how easily we can add further choices to solve a FEM problem. Each of these choices will reveal something about the nature of either the mechanical problem or the numerical solution. It is not possible to be taught every lesson from every outcome in college, so you will have to learn them by yourself, getting your hands dirty. I have already tried to address the particular case of the infinite pipe in a [recent report](https://www.seamplex.com/fino/doc/pipe-linearized/) that is worth reading before carrying on with this article. The conclusions of the report are:

* Engineering problems ought not to be solved using black boxes (i.e. proprietary software whose source code is not freely available)---more on the subject below in\ [@sec:two-materials].
* The pressurised infinite pipe has only one independent variable (the radius $r$) and one primary dependent variable (the radial displacement $u_r$).
-* The problem has analytical solution for the radial displacement and the radial, tangential and axial stresses.
+* The problem has an analytical solution for the radial displacement\ $u_r$ and the radial\ $\sigma_r$, tangential\ $\sigma_\theta$ and axial\ $\sigma_z$ stresses.
* There are no shear stresses, so these three stresses are also the principal stresses.
* Analytical expressions for the membrane and membrane plus bending stresses along any radial SCL can be obtained.
* The spatial domain can be discretised using linear or higher-order elements. In particular first and second-order elements have been used in this report.
* For the same number of elements, second-order grids need more nodes than linear ones, although they can better represent curved geometries.
-* The discretised problem size depends on the number of nodes and not on the number of elements.
+* The discretised problem size depends on the number of nodes and not on the number of elements---more on the subject below in\ [@sec:elements-nodes].
* The finite-element results for the displacements are similar to the analytical solution, with second-order grids giving better results for the same number of elements (this is expected as they involve far more nodes).
* The three stress distributions computed with the finite-element method give far more reasonable results for the second-order case than for the first-order grid. This is qualitatively explained by the fact that first-order tetrahedra have uniform derivatives, and thus the elements located on both the internal and external faces represent the stresses not at the actual faces but at the barycentres of the elements.
-* Membrane stresses converge well for both the first and second-order cases because they represent a zeroth-order moment of the stress distribution and such the excess and defect errors committed by the internal and external elements approximately cancel out.
-* Membrane plus bending stresses converge very poorly in first-order grids because the excess and defect errors do not cancel out because it is a first-order moment of the stress distribution.
-* The computational effort to solve a given problem, namely the CPU time and the needed RAM depend non-linearly on various factors, but the most important one is the problem size which is three times the number of nodes in the grid.
+* Membrane stresses converge well for both the first and second-order cases because they represent a zeroth-order moment of the stress distribution, so the excess and defect errors committed by the internal and external elements approximately cancel out (see the sketch after this list).
+* Membrane plus bending stresses converge very poorly with linear elements because, being a first-order moment of the stress distribution, the excess and defect errors do not cancel out.
+* The computational effort to solve a given problem, namely the CPU time and the RAM needed, depends non-linearly on various factors, but the most important one is the problem size, which is three times the number of nodes in the grid---more on the subject below in\ [@sec:elements-nodes].
* The error with respect to the analytical solutions as a function of the CPU time needed to compute the membrane stress is similar for both first and second-order grids. But for the computation of the membrane plus bending stress ([@fig:error-MB-vs-cpu]), first-order grids give very poor results compared to second-order grids for the same CPU time.
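
As a sketch of those two moments, here is a possible Octave version of the linearisation integrals along one SCL; the through-thickness stress profile is made up, while the two integrals are the standard membrane and bending definitions:

```octave
% membrane = zeroth-order moment, bending = first-order moment of the
% through-thickness stress profile sigma(x) along an SCL of length t
t     = 10;                                     % wall thickness [mm]
x     = linspace(0, t, 1000);
sigma = 100 - 15*x + 0.8*x.^2;                  % made-up stress profile [MPa]
s_m   = trapz(x, sigma) / t;                    % membrane stress
s_b   = 6/t^2 * trapz(x, sigma .* (t/2 - x));   % bending stress at x = 0
[s_m, s_m + s_b]                                % M and M+B
```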

::::: {#fig:error-vs-cpu}
@@ -770,9 +770,9 @@ Error in the computation of the linearised stresses vs. CPU time needed to solve

The last two bullets above lead to an issue that has come up many times when discussing convergence with respect to the mesh size with other colleagues. There apparently exists a common misunderstanding that the number of elements is the main parameter that defines how complex a FEM model is. This is strange, because even in college we are taught that the most important parameter is the size of the stiffness matrix, which is three times (for 3D problems with the [displacement-based formulation](http://web.mit.edu/kjb/www/Books/FEP_2nd_Edition_4th_Printing.pdf)) the number of _nodes_.

-Let us pretend we have to compare two different FEM programs so we solve the same problem in each and see what the results are. I have seen many times the following situation: the user loads the same geometry in both programs, meshes them so that the number of elements is more or less them same (because she wants to be “fair”) and solve the problem. Voilà! It turns out that the first program defaults to first-order elements and the second one to second-order elements. So if the first one takes one minute to obtain a solution, the second one should take nearly four minutes. How come that is a fair comparison? Or it might be the case that one program uses tetrahedra while the other one hexahedra. Or any other combination. In general, there is no single problem parameter that can be fixed to have a “fair” comparison, but if there were it would definitely be the number of nodes rather than the number of elements. Let us see why.
+Let us pretend we are given the task of comparing two different FEM programs. So we solve the same problem in each one and see what the results are. I have seen the following situation many times: the user loads the same geometry in both programs, runs the meshing step in both of them so that the number of elements is more or less the same (because she wants to be “fair”) and then solves the problem. Voilà! It turns out that the first program defaults to first-order elements and the second one to second-order elements. So if the first one takes one minute to obtain a solution, the second one should take nearly four minutes. How come that is a fair comparison? Or it might be the case that one program uses tetrahedra while the other one defaults to hexahedra. Or any other combination. In general, there is no single problem parameter that can be fixed to have a “fair” comparison, but if there were, it would definitely be the number of nodes rather than the number of elements. Let us see why.
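
A back-of-the-envelope comparison; the nodes-per-element ratios below are rough rules of thumb for large unstructured tetrahedral meshes, not exact figures:

```octave
% same element count, very different problem sizes; the ratios of nodes per
% element for tet4 and tet10 meshes are assumed rules of thumb
n_el  = 100e3;
nodes = n_el * [0.18 1.4];     % [tet4 tet10], roughly
dofs  = 3 * nodes              % the two stiffness-matrix sizes
```

Same element count, yet stiffness matrices whose sizes differ by almost an order of magnitude: fixing the number of elements is no handicap at all.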

-Fire up your imagination again and make a [thought experiment](https://en.wikipedia.org/wiki/Thought_experiment) in which you have to compare say a traditional FEM approach with a new radical formulation that a crazy mathematician from central Asia came up with claiming it is a far superior theory than our beloved finite elements (or for the case, any other formulation from\ [@sec:formulations]). How can we tell if the guy is really a genius or purely nuts? Well, we could solve a problem which we can compute the analytical solution (for example the infinite pipe from\ @[sec:infinite-pipe]) first with the traditional method (@[sec:infinite-pipe-fem]) and then with the program which uses the new formulation. Say the traditional FEM gives an error between 1% and 5% running in a few seconds depending on the mesh size. The new program from the crazy guy takes no input parameters and gives an error of 0.001%, but it takes one week of computation to produce a result. Would you say that the new radical formulation is really far superior?
+Fire up your imagination again and make a [thought experiment](https://en.wikipedia.org/wiki/Thought_experiment) in which you have to compare, say, a traditional FEM approach with a radical new formulation that a crazy mathematician from central Asia came up with, claiming it is a far superior theory to our beloved finite elements (or, for that matter, to any other formulation from\ [@sec:formulations]). How can we tell if the guy is really a genius or plain nuts? Well, we could solve a problem for which we can compute the analytical solution (for example the infinite pipe from\ [@sec:infinite-pipe]) first with the traditional method ([@sec:infinite-pipe-fem]) and then with the program which uses the new formulation. Say the traditional FEM gives an error between 1% and 5% running in a few seconds depending on the mesh size. The new program from the crazy guy takes no input parameters and gives an error of 0.1%, but it takes one week of computation to produce a result. Would you say that the new radical formulation is really far superior?

What I would do is run a FEM program that also takes one week to compute, and only then compare the errors. That is why\ [@fig:error-vs-cpu] uses the CPU time on the abscissa rather than the number of elements to compare first and second-order formulations.

@@ -789,9 +789,9 @@ The effort needed to compute a discretisation of a continuous domain depends on

### Building {#sec:building}

-The stiffness matrix is a square matrix that has\ $NG$ rows and\ $NG$ columns where $N$ is the number of nodes and $G$ is the number of degrees of freedom per node, which for three-dimensional problems is $G=3$. But even though FEM problems have to build a $NG\times NG$ matrix, they usually sweep through elements rather than through nodes, and then scatter the elements of the elemental matrices to the global stiffness matrix. So the effort needed here depends again on how the solver is programmed, but it is a combination of the number of elements and the number of nodes.
+The [stiffness matrix](https://en.wikipedia.org/wiki/Stiffness_matrix) is a square matrix that has\ $NG$ rows and\ $NG$ columns, where $N$ is the number of nodes and $G$ is the number of degrees of freedom per node, which for three-dimensional problems is $G=3$. But even though FEM programs have to build an $NG\times NG$ matrix, they usually sweep through elements rather than through nodes, and then scatter the entries of the elemental matrices into the global stiffness matrix. So the effort needed here depends again on how the solver is programmed, but it is a combination of the number of elements and the number of nodes.
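
A toy version of that sweep-and-scatter, for a chain of 1D two-node elements with unit stiffness (purely illustrative, not how any production solver is written):

```octave
% element-by-element assembly: loop over elements, scatter each elemental
% matrix into the global (sparse) stiffness matrix
n_nodes = 5;
conn = [1 2; 2 3; 3 4; 4 5];          % element-to-node connectivity
Ke   = [1 -1; -1 1];                  % elemental stiffness, unit EA/l
K    = sparse(n_nodes, n_nodes);
for e = 1:rows(conn)
  idx = conn(e,:);
  K(idx, idx) += Ke;                  % the scatter step
end
full(K)                               % tridiagonal, as expected for a 1D chain
```

Note that the result is assembled into a sparse matrix from the start, for the reasons given in the next subsection.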

-For a fixed number of nodes, first-order grids have far more elements than second-order grids because in the first case each node has to be a vertex while in the latter half will be vertexes and half will be located at the edges (think! FIGURE?). So the sweep is larger for linear grids. But the effort needed to integrate properties using quadratic shape functions is greater than for the linear case, so these two effects almost cancel out.
+For a fixed number of nodes, first-order grids have far more elements than second-order grids because in the former every node has to be a vertex while in the latter roughly half are vertices and half sit on edge midpoints (think!). So the sweep is larger for linear grids. But the effort needed to integrate properties using quadratic shape functions is greater than for the linear case, so these two effects almost cancel out.

### Solving {#sec:solving}

@@ -799,27 +799,27 @@ The linear FEM problem leads of course of a system of\ $NG$ linear equations, ca

$$K \cdot \mathbf{u} = \mathbf{b}$${#eq:kub}

-The objective of the solver is to find the vector\ $u$ of nodal displacements that satisfy the momentum equilibrium. Luckily (well, not purely by chance but by design) the stiffness matrix is almost empty. It is called a [sparse matrix](https://en.wikipedia.org/wiki/Sparse_matrix) because most of its elements are zero. If it was fully filled, then a problem with just 100k nodes would need more than 700\ Gb of RAM just to store the matrix elements, rendering FEM as practically impossible. And even though the stiffness matrix is sparse, its inverse is not so we cannot solve the elastic problem by “inverting” the matrix. Particular methods to represent and more importantly to solve linear systems involving these kind of matrices have been developed, which are the methods used by finite-element (and the other finite-cousins) programs. These methods are [iterative](https://en.wikipedia.org/wiki/Iterative_method) in nature, meaning that at each step of the iteration the numerical answer improves, i.e. $\left|K \cdot \mathbf{u} - \mathbf{b}\right| \rightarrow 0$. Even though in particular for [Krylov-subspace](https://en.wikipedia.org/wiki/Krylov_subspace) methods, $K \cdot \mathbf{u} - \mathbf{b} = \mathbf{0}$ after\ $NG$ steps, a few dozen of iterations are usually enough to assume that the problem is effectively solve (i.e. the residual is less than a certain threshold).
+The objective of the solver is to find the vector\ $\mathbf{u}$ of nodal displacements that satisfies the momentum equilibrium. Luckily (well, not purely by chance but by design) the stiffness matrix is almost empty. It is called a [sparse matrix](https://en.wikipedia.org/wiki/Sparse_matrix) because most of its elements are zero. If it were fully dense, then a problem with just 100k nodes would need more than 700\ GB of RAM just to store the matrix elements, rendering FEM practically impossible. And even though the stiffness matrix is sparse, its inverse is not, so we cannot solve the elastic problem by “inverting” the matrix. Particular methods to represent and, more importantly, to solve linear systems involving these kinds of matrices have been developed, and these are the methods used by finite-element (and the other finite-cousins) programs. These methods are [iterative](https://en.wikipedia.org/wiki/Iterative_method) in nature, meaning that at each step of the iteration the numerical answer improves, i.e. $\left|K \cdot \mathbf{u} - \mathbf{b}\right| \rightarrow 0$. Even though, in particular for [Krylov-subspace](https://en.wikipedia.org/wiki/Krylov_subspace) methods, $K \cdot \mathbf{u} - \mathbf{b} = \mathbf{0}$ after\ $NG$ steps, a few dozen iterations are usually enough to assume that the problem is effectively solved (i.e. the residual is less than a certain threshold).
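
Both facts, sparsity and iterative convergence, can be sketched with stock Octave; the tridiagonal matrix below is a toy SPD stand-in for $K$, deliberately well conditioned so that `pcg` converges in a handful of steps:

```octave
n = 1000;
e = ones(n, 1);
K = spdiags([-e 4*e -e], -1:1, n, n);   % sparse SPD stand-in for a stiffness matrix
b = e;
nnz(K) / n^2                            % fraction of non-zero entries: ~0.3 %
[u, flag, relres, iter] = pcg(K, b, 1e-10, n);
iter                                    % far fewer than n iterations
```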

-So the question is... how hard is to solve a sparse linear problem? Well, the number of iterations needed to attain convergence depends on the [spectral radius](https://en.wikipedia.org/wiki/Spectral_radius) of the stiffness matrix\ $K$, which in turns depend on the elements themselves, which depend mainly on the parameters of the elastic problem. But it also depends on how sparse\ $K$ is, which changes with the element topology and order. [@Fig:test] shows the structure of two stiffness matrices for the same linear elastic problem discretised using exactly the same number of nodes with first and second-order elements. The matrices have the same size (because the number of nodes is the same) but the former is more sparse than the latter. Hence, it would be harder (in computational terms of CPU and RAM) to solve the second-order case.
+So the question is... how hard is it to solve a sparse linear problem? Well, the number of iterations needed to attain convergence depends on the [spectral radius](https://en.wikipedia.org/wiki/Spectral_radius) of the stiffness matrix\ $K$, which in turn depends on the matrix entries themselves, which depend mainly on the parameters of the elastic problem. But it also depends on how sparse\ $K$ is, which changes with the element topology and order. [@Fig:test] shows the structure of two stiffness matrices for the same linear elastic problem discretised using exactly the same number of nodes but using linear and quadratic elements respectively. The matrices have the same size (because the number of nodes is the same) but the former is sparser than the latter. Hence, it would be harder (in computational terms of CPU and RAM) to solve the second-order case.

::::: {#fig:test}
![42k first-order elements](test1.png){#fig:test1 width=48%}\
![15k second-order elements](test2.png){#fig:test2 width=48%}

-Structure of the stiffness matrices for the same FEM problem with 10k nodes. Blue (red) are positive (negative) elements.
+Structure of the stiffness matrices for the same FEM problem with 10k nodes. Blue (red) are positive (negative) elements
:::::

In a similar way, different types of elements will give rise to different sparsity patterns which change the effort needed to solve the problem. In any case, the base parameter that controls the problem size and thus provides a basic indicator of the level of difficulty the problem poses is the number of nodes. And again, not the number of elements.

### Stress computation {#sec:stress-computation}

-In the [displacement-based formulation](http://web.mit.edu/kjb/www/Books/FEP_2nd_Edition_4th_Printing.pdf), the solver just “solves” for the displacements\ $\mathbf{u}(\mathbf{x})$ which are the principal unknowns. But from\ [@sec:tensor] we know that we actually “solved” the problem when we have the stress tensors at every location\ $\mathbf{x}$, which are the secondary unknowns. So the FEM program has to compute the stresses from the displacements using the materials’ [strain-stress constitutive equations](https://en.wikipedia.org/wiki/Constitutive_equation#Stress_and_strain) which involve the Young Modulus\ $E$, the Poisson ratio\ $\nu$ and the spatial derivatives of the displacements\ $\mathbf{u}=[u,v,w]$. This sounds easy, as we (well, the solver) knows what the shape functions are for each element and then it is a matter of computing nine derivatives (that of the three displacements in each of the three directions) and multiplying by something involving\ $E$ and\ $\nu$. Yes, but there is a catch. As the displacements\ $u$, $v$ and\ $w$ are computed at the nodes, we would like to also have the stresses at the nodes. However,
+In the [displacement-based formulation](http://web.mit.edu/kjb/www/Books/FEP_2nd_Edition_4th_Printing.pdf), the solver finds the displacements\ $\mathbf{u}(\mathbf{x})$ that satisfy [@eq:kub], which are the principal unknowns. But from\ [@sec:tensor] we know that we have actually solved the problem only when we have the stress tensor at every location\ $\mathbf{x}$; these are the secondary unknowns. So the FEM program has to compute the stresses from the displacements using the materials’ [strain-stress constitutive equations](https://en.wikipedia.org/wiki/Constitutive_equation#Stress_and_strain), which involve the Young modulus\ $E$, the Poisson ratio\ $\nu$ and the spatial derivatives of the displacements\ $\mathbf{u}=[u,v,w]$. This sounds easy, as we (well, the solver) know what the shape functions are for each element, and then it is a matter of computing nine derivatives (those of the three displacements in each of the three directions) and multiplying by something involving\ $E$ and\ $\nu$. Yes, but there is a catch. As the displacements\ $u$, $v$ and\ $w$ are computed at the nodes, we would like to also have the stresses at the nodes. However,

i. the displacements\ $\mathbf{u}(\mathbf{x})$ are not differentiable at the nodes, and
ii. if the node belongs to a material interface, neither\ $E$ nor\ $\nu$ are defined.
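
Before going on, here is a sketch of that derivatives-to-stresses step at a single point; the displacement gradient below is made up, and $E$ and $\nu$ are generic steel-like values:

```octave
E  = 200e3;  nu = 0.3;                        % steel-ish values [MPa], [-]
lambda = E*nu / ((1+nu)*(1-2*nu));            % Lamé parameters from E and nu
mu     = E / (2*(1+nu));
grad_u = [ 1.0e-4  2.0e-5  0.0;
           0.0    -3.0e-5  1.0e-5;
           2.0e-5  0.0    -3.0e-5 ];          % the nine displacement derivatives
epsilon = (grad_u + grad_u') / 2;             % small-strain tensor
sigma   = lambda*trace(epsilon)*eye(3) + 2*mu*epsilon   % Hooke's law
```

In a real solver the gradient itself comes from the shape-function derivatives, which is precisely where the catch appears.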

-Let us focus on the first item and leave the second one for a separate discussion in\ [@sec:two-materials]. The finite-element method computes the principal unknowns at the nodes and then says “interpolate the nodal values inside each element using its shape functions.” It sounds (and it is) great, but a node belongs to more than one element (you can now imagine a 3D mesh composed of tetrahedra but you can also simplify your mind pictures by thinking in just one dimension: a node is shared by two segments). So the slope of the interpolation when we move from the node into one element might (and it never is) the same as when we move from the same node into another element. Stop and think. Now let us take a look at [@fig:derivatives]. It shows four finite-element solutions for a certain problem that has a cosine as the analytical solution. All the cases use eight elements: either uniformly distributed along the domain $x \in [-1:1]$ or only one element in the negative half and the remaining seven evenly distributed between zero and one, configuring a rather extreme non-uniform grid to better illustrate the point. Both first and second-order elements were used for each mesh, hence obtaining four combinations. We know the derivative of the analytical solution is zero at $x=0$. In the uniform cases of [@fig:slab-1-0;@fig:slab-2-0], if we took the average of the derivatives at each side of the origin we would obtain zero as expected. But in the non-uniform cases\ [@fig:slab-1-1;@fig:slab-2-1], plain averaging fails even in the quadratic case.
+Let us focus on the first item and leave the second one for a separate discussion in\ [@sec:two-materials]. The finite-element method computes the principal unknowns at the nodes and then says “interpolate the nodal values inside each element using its shape functions.” It sounds (and it is) great, but a node belongs to more than one element (you can picture a 3D mesh composed of tetrahedra, but you can also simplify your mental picture by thinking in just one dimension: a node is shared by two segments). So the slope of the interpolation when we move from the node into one element might be (and in practice never is) the same as when we move from the same node into another element. Stop and think. Now let us take a look at [@fig:derivatives]. It shows four finite-element solutions for a certain problem that has a cosine as the analytical solution. All the cases use eight elements: either uniformly distributed along the domain $x \in [-1:+1]$ or only one element in the negative half and the remaining seven evenly distributed between zero and one, making up a rather extreme non-uniform grid to better illustrate the point. Both first and second-order elements were used for each mesh, hence obtaining four combinations. We know the derivative of the analytical solution is zero at $x=0$. In the uniform cases of [@fig:slab-1-0;@fig:slab-2-0], if we took the average of the derivatives at each side of the origin we would obtain zero as expected. But in the non-uniform cases\ [@fig:slab-1-1;@fig:slab-2-1], plain averaging fails even in the quadratic case.

::::: {#fig:derivatives}
![](slab-1-0.svg){#fig:slab-1-0 width=48%}\
@@ -831,9 +831,9 @@ Let us focus on the first item and leave the second one for a separate discussio
Solution of a problem using FEM using eight linear/quadratic uniform/non-uniform elements. The reference solution is a cosine. Plain averaging works for uniform grids but fails in the non-uniform cases.
:::::
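
The non-uniform failure can be reproduced by hand: take the extreme grid above with linear elements, assume the analytical solution is $\cos(\pi x/2)$ (so the exact derivative at $x=0$ is zero) and that the nodal values are exact, then average the two element slopes meeting at the origin:

```octave
u     = @(x) cos(pi*x/2);        % assumed analytical solution, u'(0) = 0
left  = (u(0) - u(-1)) / 1;      % slope of the single element in [-1,0]
right = (u(1/7) - u(0)) / (1/7); % slope of the first small element in [0,1/7]
(left + right) / 2               % ~0.41: plain averaging misses badly
```

The average is about 0.41 instead of the expected zero, just as the non-uniform panels of the figure suggest.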

-Now proceed to picturing the general three-dimensional cases with unstructured tetrahedra. What is the derivative of the displacement\ $v$ in the\ $y$ direction with respect to the $z$ coordinate at a certain node shared by many tetrahedra? What if one of the elements is very small? Or it has a very bad quality (i.e. it is deformed in one direction) its derivatives cannot be trusted? Should we still average? Should this average be weighted? How?
+Now proceed to picturing the general three-dimensional case with unstructured tetrahedra. What is the derivative of the displacement\ $v$ in the\ $y$ direction with respect to the $z$ coordinate at a certain node shared by many tetrahedra? What if one of the elements is very small? Or has such a bad quality (i.e. it is deformed in one direction) that its derivatives cannot be trusted? Should we still average? Should this average be weighted? How?

-Detailed mathematics show that the location where the derivatives of the interpolated displacements are closer to the real (i.e. the analytical in problem that have it) solution are the elements’ [Gauss points](https://en.wikipedia.org/wiki/Gaussian_quadrature). Even better, the material properties at these points are continuous (they are usually uniform but they can depend on temperature for example) because, unless we are using weird elements, there are no material interfaces inside elements. But how to manage a set of stresses given at the Gauss points instead of at the nodes? Should we use one mesh for the input and another one for the output? What happens when we need to know the stresses on a surface and not just in the bulk of the solid? There are still no one-size-fits-all answers. There is a very interesting [blog post](http://tor-eng.com/2017/11/practical-tips-dealing-stress-singularities/) by Nick Stevens that addresses the issue of stresses computed at sharp corners. What does your favourite FEM program do with such a case?
+Detailed mathematics show that the locations where the derivatives of the interpolated displacements are closest to the real solution (i.e. the analytical one in problems that have it) are the elements’ [Gauss points](https://en.wikipedia.org/wiki/Gaussian_quadrature). Even better, the material properties at these points are continuous (they are usually uniform, but they can depend on temperature, for example) because, unless we are using weird elements, there are no material interfaces inside elements. But how do we manage a set of stresses given at the Gauss points instead of at the nodes? Should we use one mesh for the input and another one for the output? What happens when we need to know the stresses on a surface and not just in the bulk of the solid? There are still no one-size-fits-all answers. There is a very interesting [blog post](http://tor-eng.com/2017/11/practical-tips-dealing-stress-singularities/) by Nick Stevens that addresses the issue of stresses computed at sharp corners. What does your favourite FEM program do with such a case?
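
A one-dimensional taste of that superconvergence, using the same assumed cosine and a single linear element of size $h$:

```octave
u  = @(x) cos(pi*x/2);   du = @(x) -pi/2 * sin(pi*x/2);
h  = 0.1;
fe = (u(h) - u(0)) / h;          % the element's constant FE derivative
abs(fe - du(h/2))                % error at the Gauss point (midpoint): ~1e-4
abs(fe - du(0))                  % error at the node: ~0.12
```

The derivative error at the midpoint is almost three orders of magnitude smaller than at the nodes.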

In any case, this step takes a non-negligible amount of time. The most common approach, i.e. the node-averaging method, is of course driven mainly by the number of nodes. So, all in all, these are the reasons to use the number of nodes instead of the number of elements as the basic parameter to measure the complexity of a FEM problem.

