## Abstract

A simple procedure for estimating the uncertainty of estimates of true solutions to problems of deflection, stress concentrations, and force resultants in solid and structural mechanics is introduced. Richardson extrapolation is carried out on a dataset of samples from a sequence of four grids. Simple median-based statistical analysis is used to establish 95% confidence intervals. The procedure leads to simple calculations that deliver reasonably tight estimates of the true solution and confidence intervals.

## Introduction

Models based on partial differential equations are commonplace in science and engineering. Due to the unavailability of analytical solutions in general, such models are converted to discrete forms, and a computer solution is sought. Discretization errors are an unavoidable consequence of the numerical scheme (e.g., finite difference, finite element, or finite volume methods) used to obtain a discrete approximation of the continuous equations. Solution verification is a process by which the discretization error of the quantities of interest is estimated [1]. The goal is to estimate both the error and the uncertainty associated with that estimate [2].

An important characteristic of discretization schemes is the convergence rate, which is the exponent in the power-law term relating the numerical truncation error to the value of a parameter associated with the discretization, usually given by the size of the finite element. The factor multiplying this term gives a measure of the overall error of a given scheme. The standard method by which to estimate the error is systematic mesh refinement. Richardson extrapolation (usually traced to Ref. [3]) is currently one of the most reliable methods (if not *the* most reliable method) for the prediction of numerical uncertainty of the error estimation (cf. Ref. [4]).

The grid convergence index (GCI) of Roache was perhaps the earliest attempt to quantify the numerical uncertainty associated with inferred convergence parameters [5,6]. Safety factors were used to approach a 95% confidence level. This approach was extended to the correction factor method of Ref. [7]. Xing and Stern later delved deeper, with somewhat pessimistic assessment of the reliability of then current approaches [8].

Heuristics to estimate the numerical uncertainty associated with a set of computed results have been proposed by Eca and Hoekstra [9]. There appears to be an assumption that the underlying numerical scheme has a known theoretical convergence rate, and that this rate is realized in simulations.

The robust verification analysis (RVA) method [10] was developed as an alternative to the GCI method. One of the goals was to allow for verification based on as few result sets as possible. Another goal was to avoid the use of safety factors. A distinguishing characteristic was the calculation of the numerical uncertainty from median-based statistics of a number of constrained-optimization fits to the numerical error model. These fits allowed constraints to introduce expert judgment. Various fitting models were used, such as multiple residual norms, various weightings of the data, regularization methods, fits with fixed orders of convergence, and fits for various subsets of the response data. This method also permits (using statistics of multiple fit estimates) the use of the converged result (extrapolation) as the best guess, and estimates the uncertainty about that value, in contrast to the GCI method, which estimates the uncertainty about the most resolved simulation value.

The formulation of the RVA approach [10] included regularization methods in order to treat under-determined cases: fits of error models performed on a less-than-minimal set of response data (such as a two-point Richardson extrapolation). The under-determined fitting was partially addressed by regularized optimization methods (Tikhonov regularization and the LASSO, the least absolute shrinkage and selection operator). Unfortunately, the results of the under-determined fitting depend to some degree on the regularization employed, and the details of the regularization were not provided in the paper.

The RVA [10] is evidently very powerful. Its sophistication comes with a price tag, though: considerable constrained-optimization software machinery is required, and unless this comes prepackaged, the user also needs to acquire substantial skills to put together the individual software components, and also the statistical sophistication to interpret the results.

There is a newer approach that builds on the RVA [11]. It offers the following critique of the RVA:

> Of primary concern is the extensive use of under-determined (e.g., two-point) and critically-determined (e.g., three-point) data sets. In the under-determined case, there are infinitely many solutions limiting the usefulness of such estimates and providing ample opportunity for many uncontrolled pathologies (e.g., strong dependence on initial guess, or a strong directional bias); this could be better addressed by increasing the use of regularization methods, although a strategy for specifying the regularization penalties should also be addressed. In the critically-determined case, all of the nonregularized fits will be exact repetitions of each other, and this fact is not taken into consideration in the Robust Verification method.

The approach called stochastic robust extrapolation error quantification, or StREEQ, was offered as a replacement. However, StREEQ was mainly developed to deal with results from stochastic simulations, even though deterministic simulations were also targeted. The claim was made that it was a robust tool for solution verification of stochastic and deterministic simulation results. The statistical robustness was due to the use of a bootstrap resampling of the response data to increase the extrapolation data pool. The inherent statistical power of the estimation procedure needs to be matched by the statistical know-how on the part of the user.

The GCI approach has been applied to solid and structural mechanics problems in Ref. [12]. A related development reports conservative error estimates based on the notions of Richardson extrapolation in stress-concentration problems [13]. Both of these approaches are compared with the present methodology in our examples.

We offer here a very simple Richardson-extrapolation procedure, which only requires the solution of a single nonlinear equation and trivial median-based statistical processing. The result is an estimate of the true solution and the convergence rate, both with estimates of the 95% confidence interval. In Sec. 1 we first introduce a classical Richardson extrapolation from results obtained using three different grids (meshes). Then we continue by introducing the estimation of the uncertainty of the extrapolation. We identify the sources of the uncertainty, and we propose to obtain a solution on a fourth grid (mesh) in order to build up a dataset of extrapolations to aid in the quantification of uncertainty. The dataset consists of extrapolations based on samples constructed from the four grids. Robust median-based statistics are applied to the dataset to extract the best guess of the true solution, and the median absolute deviation from the median is employed as a measure of the uncertainty of the best guess. In Sec. 2 we illustrate the proposed methodology with solid-mechanics examples: a stress concentration solved with an assumed-strain 3D finite element model, a shell benchmark solved with a shear-gap triangular facet finite element, and a shear-rigid plate problem solved with a commercial finite element code. The computed confidence intervals are discussed in the light of comparisons with established procedures, such as the GCI. Section 3 considers complications due to the failure to obtain data from which extrapolation is possible. Possible extensions are also discussed.

## 1 Methods and Materials

Richardson extrapolation (usually traced in history to Ref. [3]) is currently the most reliable method for the prediction of numerical uncertainty of results from discretized models (cf. Ref. [4]). As described aptly in Ref. [14], the two basic requirements for its applicability are:

1. The grid refinement is defined by a single parameter, the "element size"; and

2. The solutions on the grid are in the asymptotic range, meaning that a single term of a Taylor series expansion dominates the error.

(We shall use the terminology "mesh" and "grid" interchangeably in what follows.)

### 1.1 Extrapolation From Three Grids.

First, we outline the classical Richardson extrapolation from results obtained on three grids (meshes).

#### Element Size.

Assume that we start with an initial mesh *M*_{0}, and produce a series of progressively finer meshes from *M*_{0} by scaling the element size as a function of the position $x \in \Omega$ by a constant, the *refinement factor* $\alpha < 1$, everywhere in the domain $\Omega$, such that

$$h_{j+1}(x) = \alpha\, h_j(x) \qquad (1)$$

For instance, mesh *M*_{1} would be produced with element size $h_1(x) = \alpha h_0(x)$, mesh *M*_{2} with element size $h_2(x) = \alpha h_1(x) = \alpha^2 h_0(x)$, and so on.

In the above, the refinement factor was assumed constant when going from mesh *M*_{0} to *M*_{1} to *M*_{2} and so on. Strictly speaking, keeping the refinement factor constant is not necessary. We may choose a different refinement factor for each new mesh. Thus, mesh *M*_{1} could be produced with element size $h_1(x) = \alpha_0 h_0(x)$, mesh *M*_{2} with element size $h_2(x) = \alpha_1 h_1(x) = \alpha_0 \alpha_1 h_0(x)$, and so on.

We define the *relative element size* $\gamma_j = h_j(x)/h_0(x)$, which does not depend on the position $x$. In the following, we shall consider the relative element size a continuous variable, and the extrapolation will be performed with respect to this variable. For any particular mesh $j$, we may write the relative element size in terms of the refinement factors between successive meshes as

$$\gamma_j = \prod_{i=0}^{j-1} \alpha_i$$

#### Error Expansion.

Assuming the true error of the quantity of interest $q$ may be expanded in a Taylor series at element size $h = 0$ (or, alternatively, relative element size $\gamma = 0$), we may write

$$E_q(\gamma) = q_{\gamma} - q_{ex} \approx C\, \gamma^{\beta} \qquad (4)$$

where $q_{\gamma}$ is the solution computed for relative element size $\gamma$, $E_q(\gamma)$ is the true error, $q_{ex}$ is the true (exact) solution, and $C$ is an unknown constant of the leading term $\gamma^{\beta}$, with the rate of convergence, $\beta > 0$, again unknown. Provided $q_{ex}$, $C$, and $\beta$ do not depend on $\gamma$ for small element sizes (this is presumed to hold in the asymptotic range), we might be able to estimate all three quantities $q_{ex}$, $C$, and $\beta$, if three numerical solutions, $q_{\gamma_i}$, are obtained for three different relative element sizes, $\gamma_1 < \gamma_2 < \gamma_3$. Then, we have a sequence of solutions $q_{\gamma_j}$, and so we can write the so-called approximate error as the difference between two successive solutions

$$E_a(\gamma_{j+1}) = q_{\gamma_{j+1}} - q_{\gamma_j} \approx C \left( \gamma_{j+1}^{\beta} - \gamma_j^{\beta} \right)$$

Taking the ratio of two successive approximate errors eliminates the constant $C$ to yield

$$\frac{q_{\gamma_3} - q_{\gamma_2}}{q_{\gamma_2} - q_{\gamma_1}} = \frac{\gamma_3^{\beta} - \gamma_2^{\beta}}{\gamma_2^{\beta} - \gamma_1^{\beta}} \qquad (9)$$

a single nonlinear equation for the convergence rate $\beta$. Then the constant $C$ follows as

$$C = \frac{q_{\gamma_2} - q_{\gamma_1}}{\gamma_2^{\beta} - \gamma_1^{\beta}} \qquad (10)$$

and the estimate of the true error for any element size can then be calculated from Eq. (4). Obviously, with the estimate of the true error, we also have the estimate of the true solution $q_{ex}$.
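In summary, the extrapolation amounts to solving the single nonlinear Eq. (9) for $\beta$, for instance by bisection, and back-substituting into Eqs. (10) and (4). A minimal sketch in Python (the function name `richardson3`, the bracket [0.001, 10], and the iteration count are our own choices, not part of the published procedure):

```python
def richardson3(gamma, q, beta_lo=1e-3, beta_hi=10.0, iters=200):
    """Classical three-grid Richardson extrapolation.

    gamma : the three relative element sizes, in the same order as q
    q     : the three computed solutions q_gamma
    Returns (q_ex, C, beta): true-solution estimate, constant of the
    leading error term, and convergence rate, per Eqs. (9), (10), (4).
    """
    g1, g2, g3 = gamma
    q1, q2, q3 = q
    ratio = (q3 - q2) / (q2 - q1)  # left-hand side of Eq. (9)

    def residual(beta):
        return (g3**beta - g2**beta) / (g2**beta - g1**beta) - ratio

    lo, hi = beta_lo, beta_hi
    if residual(lo) * residual(hi) > 0:
        raise ValueError("no bracket: data not suitable for extrapolation")
    for _ in range(iters):  # plain bisection on the nonlinear Eq. (9)
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    C = (q2 - q1) / (g2**beta - g1**beta)  # Eq. (10)
    q_ex = q1 - C * g1**beta               # Eq. (4): q_gamma - q_ex = C*gamma^beta
    return q_ex, C, beta
```

For a manufactured sequence $q(\gamma) = 3 - \gamma$ sampled at $\gamma = 1, 1/2, 1/4$, the function recovers $q_{ex} = 3$, $C = -1$, and $\beta = 1$.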

### 1.2 Estimate of the Uncertainty.

It is common knowledge that the three-grid procedure from Sec. 1.1 cannot be used to quantify the uncertainty of the true solution estimate: four grids are needed [9]. The reason is that there are three unknown parameters, and the solutions from three grids supply just enough equations to solve for the three unknowns. The result of the extrapolation is a number—the estimate of the true solution—without any indication of the uncertainty of this number.

Where does this uncertainty come from? We can identify at least three sources:

1. The succession of grids should ideally be based on nesting. However, this is typically not the case: unstructured grids are generated by manipulating some form of mesh control to produce meshes which only approximately satisfy (1). This introduces a more or less random error in the target quantity.

2. The solutions obtained may not (all) be in the asymptotic range. The consequence is that errors are introduced into the extrapolation procedure, and these errors are not necessarily all random.

3. Computer arithmetic may introduce further errors into the true-solution estimate. Loss of significant digits, or the inherent limits of processes iterating to convergence, will introduce random errors into the computed numbers that are fed into the extrapolation. Consequently, the extrapolations will be affected by these random errors.

Four or more grids have been used in Ref. [9]. The error was estimated by fitting the expansions of the form (4) to the data in the least-squares sense. The selection of the best error estimate was founded on the standard deviation of the fits. The error estimate was formed into an uncertainty using a safety factor. The safety factor was proposed to depend on the observed convergence rate and the magnitude of the standard deviation of the fit.

Robust verification analysis was introduced by Rider et al. [10]. Results from simulations with multiple grids were treated with optimization instead of with linear or nonlinear regression. The measure of central tendency was selected to be the median instead of the mean, a choice motivated by the median's robustness to outliers [15]. In fact, optimization in the $L_1$ norm automatically removed outliers from the data. The optimization started from results obtained with fine meshes, and new estimates for coarser meshes were progressively added, with the optimization fit recomputed for each additional result. From the considerable machinery required for the implementation of the RVA procedure, we derive a readily executable process that does not require anything beyond solving Eqs. (9), (10), and (4).

#### Estimate of the True Solution.

One of the main ideas in this work is to form three-tuples of inputs to the extrapolation process (9), (10), and (4) by sampling from the results obtained using four meshes. Table 1 shows the sampling scheme. The four meshes with relative element sizes $\gamma_j$ result in computed numbers $q_{\gamma_j}$. The sampling in row one of the table then produces the extrapolation $q_{ex}^{(1)}$, as indicated by the bullets. The four samplings thus result in four predictions $q_{ex}^{(k)}$, where $k = 1, 2, 3, 4$. These are subsequently used in the estimation of uncertainty.

Sampling | $q_{\gamma_1}$ | $q_{\gamma_2}$ | $q_{\gamma_3}$ | $q_{\gamma_4}$ | Extrapolation
---|---|---|---|---|---
1 | ● | ● | ● | | $\Rightarrow\ q_{ex}^{(1)}$
2 | ● | | ● | ● | $\Rightarrow\ q_{ex}^{(2)}$
3 | ● | ● | | ● | $\Rightarrow\ q_{ex}^{(3)}$
4 | | ● | ● | ● | $\Rightarrow\ q_{ex}^{(4)}$

The spread of the four predictions $q_{ex}^{(k)}$ about their median $\tilde{q}_{ex} = \operatorname{median}_k\, q_{ex}^{(k)}$ is measured by the median absolute deviation from the median

$$q_{mad} = b \times \operatorname{median}_k \left| q_{ex}^{(k)} - \tilde{q}_{ex} \right|$$

Here $b$ is needed to make the estimator consistent for the parameter of interest. In the case of a parameter with a normal distribution, we need to set $b = 1.4826$ [16,17].

The median is combined with the prediction from the three finest grids

$$q_{ex}^{*} = W\, q_{ex}^{(4)} + (1 - W)\, \tilde{q}_{ex} \qquad (13)$$

to produce the final estimate of the true solution. Here $0 \le W \le 1$ is an arbitrary weighting factor. In this paper we always take $W = 2/3$, meaning we give more weight to the estimate $q_{ex}^{(4)}$ than to the median $\tilde{q}_{ex}$.

The 95% confidence interval is then estimated as

$$q_{ex}^{*} \pm 2 \times q_{mad} \qquad (14)$$

where the factor 2 corresponds to the two-standard-deviation confidence interval for normal distributions. It bears emphasis that the estimate (14) does not represent bounds on the true solution, merely a band centered on $q_{ex}^{*}$ where the true value can be expected to be contained with high probability.
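The complete four-grid procedure, the sampling of Table 1 followed by the median-based statistics, can be sketched as follows. The block is self-contained: the helper `extrapolate` re-implements the three-grid solution of Eq. (9) by bisection, and the values $W = 2/3$ and $b = 1.4826$ are those used in this paper; the function names are our own.

```python
import itertools
import statistics

def extrapolate(gamma, q, iters=200):
    """Three-grid Richardson extrapolation: solve Eq. (9) for beta by
    bisection, then recover the true-solution estimate via Eqs. (10), (4)."""
    g1, g2, g3 = gamma
    q1, q2, q3 = q
    ratio = (q3 - q2) / (q2 - q1)
    f = lambda b: (g3**b - g2**b) / (g2**b - g1**b) - ratio
    lo, hi = 1e-3, 10.0
    if f(lo) * f(hi) > 0:
        raise ValueError("data not suitable for extrapolation")
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    beta = 0.5 * (lo + hi)
    C = (q2 - q1) / (g2**beta - g1**beta)   # Eq. (10)
    return q1 - C * g1**beta, beta          # estimate of q_ex, and the rate

def four_grid_estimate(gamma, q, W=2/3, b=1.4826):
    """Sampling scheme of Table 1: all four 3-subsets of the four grids,
    then the median statistics of Eqs. (13) and (14)."""
    subsets = list(itertools.combinations(range(4), 3))  # last = three finest
    ex = [extrapolate([gamma[i] for i in s], [q[i] for i in s])[0]
          for s in subsets]
    med = statistics.median(ex)          # median of the four predictions
    best = W * ex[-1] + (1 - W) * med    # Eq. (13); ex[-1] uses grids 2, 3, 4
    mad = b * statistics.median(abs(e - med) for e in ex)
    return best, 2 * mad                 # Eq. (14): best guess and half-width
```

For the four coarsest meshes of the stress-concentration example in Sec. 2.1, this reproduces the first row of Table 3, $\sigma_x = 3.09874 \pm 0.16798$ MPa, to within rounding of the tabulated inputs.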

#### Estimate of the Convergence Rate.

The convergence rate is estimated analogously: the four samplings of Table 1 also yield four convergence rates $\beta^{(k)}$, and the same median-based statistics produce the best guess of the rate and its 95% confidence interval (15).

#### Expert Judgment.

The convergence rates computed from Eq. (15) should be subject to expert judgment. In particular, convergence rates which are unreasonably high should lead the analyst to question both the best guess of the rate and its confidence interval (15), and the best guess of the solution and its confidence interval (14). However, it would be difficult to incorporate expert judgment a priori, as is sometimes suggested in fluid dynamics [10]: numerical models in solid mechanics can work correctly, yet converge at unexpected rates. This has also been reported in computational fluid dynamics [18], but solid mechanics models present additional difficulties: various forms of locking (volumetric, shear, Poisson), both for reduced-dimension models (beams, shells) and for continuum models [19]. Furthermore, various kinds of singularities will affect the convergence rate of the quantity of interest in the primary domain, even when the singularities are present only in the secondary domain (outside the region of interest) [20]. For example, various kinds of boundary conditions, such as hard simple supports for shells, will produce singular force resultants; clamped boundary conditions introduce stress singularities which, even when not inspected directly, may pollute the stress results elsewhere; and so on.

## 2 Numerical Examples

For comparison, the grid convergence index is computed for a triplet of solutions as

$$\mathrm{GCI} = F_s\, \frac{\left| q_{\gamma_3} - q_{\gamma_2} \right|}{\left| q_{\gamma_3} \right|}\, \frac{1}{\alpha^{-\beta} - 1} \qquad (16)$$

where $F_s$ is a safety factor, typically taken at the value of 1.25 in well-converging simulations. The second term, the fraction in terms of the computed results, is the normalized approximate error. The last term is a correction factor that converts between the approximate error and the true error. The results are then reported in terms of the GCI as

$$q = q_{\gamma_3} \left( 1 \pm \mathrm{GCI} \right) \qquad (17)$$

where $q_{\gamma_3}$ is the solution on the finest grid. The convergence rate in solid and structural mechanics typically cannot be assumed at the theoretical value, which means that it needs to be computed as discussed above. Reference [12] describes the estimation (17) thus:

> … the GCI is a dimensionless indicator for the mesh convergence error relative to the finest-zoned solution. Specifically, when multiplied by the finest-zoned solution, it provides the width of an error band, centered on that solution, within which the exact solution is very likely to be contained.

The error-band aspect is consistent with our confidence interval (14). Therefore, in the examples in this section, we compare our approach with the GCI-based estimation.
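For reference, the GCI evaluation can be sketched in a few lines. We assume the standard Roache form, a safety factor times the normalized approximate error times the correction factor $1/(r^{\beta} - 1)$, with $r = 1/\alpha > 1$ the grid refinement ratio and the observed rate $\beta$ computed as in Sec. 1.1; the function and its defaults are illustrative, not the exact implementation of Ref. [12]:

```python
def gci(q_coarse, q_fine, r, beta, Fs=1.25):
    """Grid convergence index in the standard Roache form:
    GCI = Fs * |(q_fine - q_coarse)/q_fine| / (r**beta - 1),
    with r > 1 the refinement ratio between the two grids.
    The result is reported as the band q_fine * (1 +/- GCI)."""
    eps = abs((q_fine - q_coarse) / q_fine)  # normalized approximate error
    return Fs * eps / (r**beta - 1.0)
```

For example, for the manufactured sequence $q(\gamma) = 3 - \gamma$ with $r = 2$ and $\beta = 1$, the band about the fine-grid solution 2.75 has relative half-width `gci(2.5, 2.75, 2.0, 1.0)`, roughly 0.114.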

We will also compare with the alternative estimation of Ref. [13], which we will designate by the initials of the authors as BSR.

### 2.1 Stress Concentration in 3D.

The following problem was considered in Ref. [21]: an infinite slab 0.1 m thick with a stress-free circular hole of radius $R_1 = 0.1$ m, under far-field unidirectional tensile loading $P = 1.0$ MPa. The Young's modulus was $E = 2.4$ MPa and the Poisson's ratio was $\nu = 0.49995$ (i.e., the material was nearly incompressible). Plane-strain conditions were simulated by fixing the faces parallel to the plane of the slab in the direction normal to them. A region of radius $R_2 = 0.4$ m around the stress-free hole was modeled. Owing to the two-plane symmetry of the model, only a quarter of the geometry was considered for analysis.

The exact stresses $\sigma_x$ and $\sigma_y$ follow from the radial and angular stress solutions given in Ref. [22], written in terms of the distance $r$ from the center of the stress-free hole and the anticlockwise angle $\theta$ with respect to the horizontal $X$-axis. These stresses were used to form the traction boundary conditions on the outer cylindrical surface.

The target quantity was the direct stress $\sigma_x$ at point A shown in Fig. 1. From the analytical solution, one can evaluate $\sigma_x|_A = 3P$.

A high-performance mean-strain hexahedral finite element was used. The element was stabilized (in the sense of correcting the rank deficiency) by setting up a stabilization energy that was sampled either with the full quadrature rule or with the mean-strain quadrature [23]. The element is robust with respect to the near incompressibility of the material. Hence, we can hope to realize a rate of convergence close to the theoretical value of 1.0.

The problem was solved using six meshes, which provided the opportunity to apply the uncertainty quantification outlined above three times: three four-tuples of successively refined meshes could be formed. The coarsest mesh is shown in Fig. 1, and the successive refinements were produced by isotropic bisection, followed by projection onto the cylindrical surfaces. Table 2 shows the meshes used and the computed stress at point *A*.

Mesh | # elements | # nodes | Relative element size γ | Stress $\sigma_x$ (MPa)
---|---|---|---|---
0 | 20 | 60 | 1.000 × 10^{0} | 1.96424
1 | 160 | 297 | 5.000 × 10^{–1} | 2.35360
2 | 1280 | 1785 | 2.500 × 10^{–1} | 2.62957
3 | 10,240 | 12,177 | 1.250 × 10^{–1} | 2.79966
4 | 81,920 | 89,505 | 6.250 × 10^{–2} | 2.89550
5 | 655,360 | 685,377 | 3.125 × 10^{–2} | 2.94656

Table 3 presents the three extrapolations performed on the four-tuples. For instance, the first row shows extrapolation from the meshes 0, 1, 2, and 3: extrapolation from $1.96424, 2.35360, 2.62957$ yields 3.30123 (with convergence rate 0.49659); extrapolation from $1.96424, 2.62957, 2.79966$ results in the estimate 3.11464 (with convergence rate 0.62294); extrapolation from $1.96424, 2.35360, 2.79966$ yields 3.18620 (with convergence rate 0.55350); and finally extrapolation from $2.35360, 2.62957, 2.79966$ gives the estimate 3.07290 (with convergence rate 0.69821). The median of the four predictions is $\tilde{q}_{ex} = 3.15042$. Thus the best guess according to Eq. (13) is $(2/3) \times 3.07290 + (1/3) \times 3.15042 = 3.09874$. The details of the calculation of the uncertainty, and the analogous calculation of the convergence rate and its uncertainty, are omitted; we note only that the stress is predicted to be $\sigma_x = 3.09874 \pm 0.16798$ MPa.

Meshes | Stress $\sigma_x$ (MPa) | $2 \times \sigma_x^{mad}$ | β | $2 \times \beta^{mad}$
---|---|---|---|---
0, 1, 2, 3 | 3.09874 | 0.16798 | 0.66154 | 0.12634
1, 2, 3, 4 | 3.02547 | 0.04027 | 0.80354 | 0.07961
2, 3, 4, 5 | 3.00647 | 0.01078 | 0.89319 | 0.04918

The other two rows of Table 3 are completed analogously. The convergence rate can be seen to increase with refinement from $0.66154 \pm 0.12634$ to $0.89319 \pm 0.04918$.

Figure 2 illustrates the uncertainty quantification of the predicted stress $\sigma x|A$ that corresponds to the data in Table 3. The predicted stress is shown with a filled dot, and the uncertainty interval is shown with whiskers. As can be seen, the uncertainty interval includes the exact answer of 3.0 MPa.

Now we present the estimation based on the GCI. Note that the six computed results in Table 2 allow for three extrapolations, but four estimates using the GCI. As shown in Fig. 3, the *extrapolated* value of the stress is above the true value. In contrast, the *calculated* values of the stress approach the true value from below. The error band of the GCI contains the true solution, but it is approximately five times wider than the confidence interval obtained with our approach.

### 2.2 Shell Problem: Scordelis-Lo Roof.

The Scordelis-Lo cylindrical roof is one of the original obstacle-course benchmarks and was discussed at length by Krysl and Chen [24]. The target quantity is the deflection of the midpoint of the free straight edge. The robust flat-facet triangular shear-flexible finite element [25] produced the limit value of the deflection of −0.30148 ft (in the direction of the load), extrapolated from extremely refined meshes (512 element edges for the finest mesh).

Meshes with the numbers of elements per side listed in Table 4 were used, i.e., a total of seven meshes, which made it possible to use the extrapolation procedure with uncertainty quantification for a grand total of four estimates: four tuples of four meshes were available. The results are summarized in Table 5. The estimate in the last row is just 0.02% off the reference value of −0.30148 ft.

Mesh | # elements per side | Deflection (ft)
---|---|---
0 | 7 | –0.249830
1 | 9 | –0.266567
2 | 12 | –0.280147
3 | 15 | –0.287211
4 | 18 | –0.291278
5 | 23 | –0.295003
6 | 28 | –0.296972

Meshes | Deflection w (ft) | $2 \times w^{mad}$ | β | $2 \times \beta^{mad}$
---|---|---|---|---
0, 1, 2, 3 | –0.305133 | 0.00551 | 1.502 | 0.171
1, 2, 3, 4 | –0.302653 | 0.00184 | 1.683 | 0.090
2, 3, 4, 5 | –0.301741 | 0.00053 | 1.797 | 0.060
3, 4, 5, 6 | –0.301427 | 0.00023 | 1.863 | 0.040

This shell is known to respond to the applied loading with a mixed membrane-bending stress state. Given that both the membrane and the bending-plus-shear mechanics of the element will be involved, it is not clear what the rate of convergence should be. It is interesting to note that the evaluated convergence rate varies noticeably from the coarsest meshes to the finest (refer to Table 5). Clearly, until the estimation procedure begins to work with reasonably fine meshes (say, mesh 4 and finer), the asymptotic range has not been reached. Yet, the estimates of the limit solution and its associated uncertainty are quite reasonable, and the 95% confidence interval includes the reference solution. This is also shown in Fig. 5, which graphically summarizes the estimates of the limit deflection (filled dots) and the uncertainty intervals of $\pm 2 \times w^{mad}$.

In order to satisfy the assumptions of the GCI calculation, the solution was recalculated using a sequence of grids related by nesting, i.e., the meshes were generated by bisection in both the axial and circumferential directions. Mesh 0 had four element edges along each direction. Estimation using the GCI is shown in Fig. 6. Clearly, the error band reflects the distance between the solution on the finest mesh of each triplet and the true solution. The width of the error band is much larger than our confidence interval (Fig. 5).

### 2.3 Bending Moments in Equilateral Triangular Plate.

The equilateral triangular plate under uniform transverse load admits an analytical solution for the bending moment at the centroid, expressed in terms of the Poisson ratio $\nu$, the magnitude $q$ of the uniform distributed loading, and the length $L$ of the side of the triangle. Here we consider a very thin plate (with thickness 0.05 mm), and the parameters $L = 500$ mm, $\nu = 0.3$, $q = 100$ N·mm⁻². The target value is the bending moment resultant at the centroid, $M_{x,exact} = 451389$ N·mm/mm.

The problem was solved using the Abaqus finite element program [27]. The thin-shell-specialized three-node triangular finite element with a discrete Kirchhoff-Love constraint, STRI3, was used with unstructured meshes. Mesh 0 was constructed of elements of size 60 mm, and subsequent (unstructured) meshes were produced by halving the element size. The moment resultants were evaluated at the node placed at the centroid (refer to Table 6).

Mesh | Relative element size $\gamma_j$ | Moment $M_{x,j}$ (N·mm/mm)
---|---|---
0 | 1.000 | 443750
1 | 0.500 | 447393
2 | 0.250 | 449892
3 | 0.125 | 451133
4 | 0.0625 | 451351

The extrapolation estimate of the true solution from the first four grids was $M_x = 452624 \pm 1832$ N·mm/mm, and from grids 1, 2, 3, and 4 we obtained $M_x = 451446 \pm 376$ N·mm/mm. Both estimates contained the analytical solution (refer to Fig. 7). For the convergence rate we obtained $\beta = 0.923 \pm 0.288$ and $\beta = 2.198 \pm 0.868$, respectively. The convergence rate should theoretically be close to 2.0 in the asymptotic range.

## 3 Discussion

The procedure outlined above is predicated on the data being suitable for extrapolation. Essentially that means that the calculated results approach a limit value monotonically, either from below or from above, and the successive approximate errors (i.e., differences of successive solutions) decrease in magnitude with each refinement. If these conditions are not met, the extrapolation formulas cannot be applied, in particular, Eq. (9) will not have a solution.
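Both preconditions, a monotone approach to a limit and approximate errors that shrink with each refinement, are easy to check programmatically before attempting an extrapolation. A small helper of our own devising (not part of the published procedure):

```python
def suitable_for_extrapolation(q):
    """Check that the successive solutions q approach a limit monotonically
    and that the approximate errors (differences of successive solutions)
    decrease in magnitude with each refinement."""
    d = [b - a for a, b in zip(q, q[1:])]      # approximate errors
    same_sign = all(x * d[0] > 0 for x in d)   # monotone approach to the limit
    shrinking = all(abs(b) < abs(a) for a, b in zip(d, d[1:]))
    return same_sign and shrinking
```

Applied to the STRI65 results of Table 7 below, the check fails for the full sequence of five solutions but passes for the last four, in agreement with the discussion.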

Recall that we have postulated the availability of four solutions obtained from four meshes. It is possible that one or two of the three-tuples sampled from this dataset are not suitable for extrapolation. This would result in a reduced-size dataset, which could still be used to calculate the confidence interval (14). However, we should probably be wary of putting too much faith in an estimate based on so little data. An alternative would be to use the methods of the RVA [10].

Since Ref. [17] does not provide sufficient information to reproduce their calculations, we furnish here examples of computations that arrive at error models in the form of Eq. (4) by optimization approaches. The equilateral plate of Sec. 2.3 will be used as an illustration. The quadratic triangle STRI65 has been shown to be too flexible in thin-shell applications [24]; in fact, one might argue that this finite element cannot be relied upon. We illustrate the problems here with the values of the centroid bending moment resultant computed with the STRI65 triangular element, starting from mesh 0 of element size 50 mm and then halving the element size four times to produce the data in Table 7. Clearly, the element does not converge monotonically, even though the last four results do.

Mesh | Relative element size $\gamma_j$ | Moment $M_{x,j}$ (N·mm/mm)
---|---|---
0 | 1.000 | 450601
1 | 0.500 | 454784
2 | 0.250 | 453153
3 | 0.125 | 451558
4 | 0.0625 | 451439

The error model (4) may alternatively be fitted to the data by minimizing an objective function $f$ expressing the error of the fit of the model. Three choices are used below: the maximum norm of the error of the model fit

$$f_{\infty} = \max_j \left| q_{ex} - q_{\gamma_j} - C\,\gamma_j^{\beta} \right| \qquad (23)$$

the "two-norm"

$$f_{2} = \left[ \sum_j \left( q_{ex} - q_{\gamma_j} - C\,\gamma_j^{\beta} \right)^2 \right]^{1/2} \qquad (24)$$

and the "one-norm"

$$f_{1} = \sum_j \left| q_{ex} - q_{\gamma_j} - C\,\gamma_j^{\beta} \right| \qquad (25)$$

The nonsmooth optimization problems could be solved with the evolutionary optimization solver widely available in Excel (refer to the Appendix).
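For the two-norm objective, no general-purpose optimizer is strictly needed: for a fixed $\beta$ the model $q_\gamma \approx q_{ex} + C\gamma^{\beta}$ is linear in $(q_{ex}, C)$ and can be solved by the 2×2 normal equations, with a one-dimensional scan selecting $\beta$. A sketch of this approach (the scan range and resolution are our own choices):

```python
def fit_error_model(gamma, q, betas=None):
    """Two-norm fit of the model q_j ~ q_ex + C*gamma_j**beta.
    For each candidate beta, solve the 2x2 normal equations for
    (q_ex, C); keep the beta with the smallest residual sum of squares."""
    if betas is None:
        betas = [k / 100 for k in range(5, 501)]  # beta in [0.05, 5.00]
    n = len(q)
    best = None
    for beta in betas:
        x = [g**beta for g in gamma]
        sx, sxx = sum(x), sum(v * v for v in x)
        sq, sqx = sum(q), sum(a * b for a, b in zip(q, x))
        det = n * sxx - sx * sx
        if abs(det) < 1e-12:
            continue
        # Cramer's rule for the normal equations of q_j = q_ex + C*x_j
        q_ex = (sq * sxx - sqx * sx) / det
        C = (n * sqx - sx * sq) / det
        rss = sum((qi - q_ex - C * xi) ** 2 for qi, xi in zip(q, x))
        if best is None or rss < best[0]:
            best = (rss, q_ex, C, beta)
    _, q_ex, C, beta = best
    return q_ex, C, beta
```

On manufactured data $q(\gamma) = 3 - \gamma^{1.5}$, the fit recovers $q_{ex} = 3$, $C = -1$, and $\beta = 1.5$ exactly, since the true rate lies on the scan grid.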

Figure 8 shows the data points from the first four rows of Table 7 with markers and the three models with curves: the dashed line is the model for the objective function (23), the dotted line is the model for the objective function (24), and the dash-dotted line is the model for the objective function (25). Further optimization models could be derived by weighting the results on the finer meshes more strongly than the results from the coarse meshes [10].

Figure 9 shows the models for the same objective functions calculated for the entire range of five simulation results. Evidently, the predictions of the limit value depend heavily on the formulation of the objective function. None of the models predicted the analytical solution very well.

The finest four meshes produced results that appear to be compatible with extrapolation (refer to Fig. 10). The scatter of the predicted true values of the bending moment is substantial, and again the predictions depend quite strongly on the type of objective function. In this case, however, our procedure was applicable, and the results were $M_x = 451229 \pm 1754$ N·mm/mm and $\beta = 2.977 \pm 2.210$. While the prediction of the bending moment resultant was spot on, the uncertainty was quite large. That uncertainty was also reflected in the convergence rate, which was quite high, with a very broad confidence interval.

A tentative conclusion may be that it is better to have data suitable for extrapolation and to use the proposed procedure, rather than to have a general optimization procedure for using data unsuitable for extrapolation.

## 4 Conclusions

The Richardson extrapolation procedure was extended to include quantification of the uncertainty of the estimate of the true solution. The estimate of the true solution is computed from a database of extrapolated values extracted from four three-tuples of calculated solutions based on a sequence of four progressively refined meshes. The database can be processed using median statistics to extract the best guess of the solution and the median absolute deviation as a measure of the uncertainty. The estimator produces the best guess of the solution and a confidence interval (or, alternatively, an error band). It was shown here that the confidence interval contained the true solution, at least in our examples, but no bounds can be guaranteed in general. The estimator does not require substantial software infrastructure, as it relies purely on the solution of a single nonlinear equation, easily treated with a handheld calculator. The precondition of the procedure is a monotonically converging sequence of computed solutions. We showed in the discussion section that it may be possible to accommodate data unsuitable for extrapolation with a procedure based on optimization. However, preference should be given to the acquisition of data suitable for extrapolation.

## Acknowledgment

William J. Rider, Sandia National Laboratories, Albuquerque, is thanked for clarifications concerning the paper cited herein. The anonymous reviewers provided valuable comments, and their efforts are sincerely appreciated.

### Appendix: Optimization Solver

Figure 11 shows the setup of the spreadsheet for the optimization model (23). The objective function is defined as `MAX(ABS($B$11-C2:C6-$C$11*POWER(B2:B6,$D$11)))`. Note that the evolutionary solver will not be able to converge without reasonably tight constraints on the "changing variables" in cells B11, C11, and D11. While it is reasonably easy to guess the ranges of the true value and the convergence rate, guessing the range of the constant C11 can be tricky.