{"id":32,"date":"2024-04-03T23:09:23","date_gmt":"2024-04-03T20:09:23","guid":{"rendered":"https:\/\/sisu.ut.ee\/measurement\/frequently-asked-questions\/"},"modified":"2025-04-24T09:42:52","modified_gmt":"2025-04-24T06:42:52","slug":"frequently-asked-questions","status":"publish","type":"page","link":"https:\/\/sisu.ut.ee\/measurement\/frequently-asked-questions\/","title":{"rendered":"Frequently asked questions"},"content":{"rendered":"<p><strong>0. Please see this paper:\u00a0<\/strong><a title=\"Metrology in chemistry: some questions and answers\" href=\"https:\/\/www.acgpubs.org\/doc\/20201130174210A1-50-JCM-2010-1838.pdf\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/www.acgpubs.org\/doc\/20201130174210A1-50-JCM-2010-1838.pdf\">Leito, I., Helm, I. Metrology in chemistry: some questions and answers. <em>J.Chem.Metrol.<\/em> <strong>2020<\/strong>,\u00a014:2, 83-87<\/a><strong> for a number of questions and answers relevant to practical chemical analysis situations.<\/strong><\/p>\n<hr>\n<p><strong>1. How many decimal places should we leave after comma when presenting\u00a0results?<\/strong><\/p>\n<p>The number of decimals after the comma depends on the order of magnitude of the result and can be very different. It is more appropriate to ask, how many significant digits should be in the uncertainty estimate. This is explained in the video in <a title=\"section 4.5\" href=\"https:\/\/sisu.ut.ee\/measurement\/45-presenting-measurement-results\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/sisu.ut.ee\/measurement\/45-presenting-measurement-results\">section 4.5<\/a>. The number of decimals according to that video is OK for the results, unless there are specific instructions given how many decimals after the point should be presented. When presenting result together with its uncertainty then the number of decimals in the result and in uncertainty must be the same.<\/p>\n<hr>\n<p><strong><span style=\"line-height: 1.6em;\">2. 
If we need to find the standard deviation of those within-lab reproducibility\u00a0measurements, do we certainly need to use the pooled one? Can we not take the\u00a0simple standard deviation, calculated by the usual standard deviation formula?<\/span><\/strong><\/p>\n<p><span style=\"line-height: 1.6em;\">The within-lab reproducibility standard deviation <em>s<\/em><sub>RW<\/sub> characterises how well the measurement procedure can reproduce the same results on different days with the same sample. If the sample is not the same (as in this self-test) and you just calculate the standard deviation of the results, then the obtained standard deviation includes both the reproducibility of the procedure and the difference between the samples. In the case of this self-test the difference between the samples is much larger than the within-lab reproducibility. So, if you simply calculate the standard deviation over all the results, you will not obtain the within-lab reproducibility but rather the variability of analyte concentrations in the samples, with a (small) within-lab reproducibility component added.<\/span><\/p>\n<hr>\n<p><strong>3. In estimation of uncertainty via the modelling approach: When can we\u00a0use the Kragten approach and when do we just use the combination of\u00a0uncertainties?<\/strong><\/p>\n<p>In principle, you can always use the Kragten approach. However, if the relative uncertainties of the input quantities are large, and especially if such a quantity happens to be in the denominator, then the uncertainty found with the Kragten approach can differ from that found using equation 4.11. This is because the Kragten approach is an approximate method.<\/p>\n<hr>\n<p><strong>4. Exactly what is the human factor? I thought that it may be, for example, a person\u2019s psychological condition, personal experience and so on? 
This will definitely influence measurement, but is this taken into account then?<\/strong><\/p>\n<p>The \u201chuman factor\u201d is not a strict term. It collectively refers to different sources of uncertainty that are due to the person performing the analysis. These uncertainty sources can either cause random variation of the results or a systematic shift (bias). The table below gives some examples of uncertainty sources that can be caused by the \u201chuman factor\u201d.\u00a0In correct measurement uncertainty estimation the \u201chuman factor\u201d will be automatically taken into account if the respective uncertainty sources are taken into account.<\/p>\n<table class=\"table table-hover\" border=\"1\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td valign=\"top\">Uncertainty source<\/td>\n<td valign=\"top\">Type<\/td>\n<td valign=\"top\">Taken into account by<\/td>\n<\/tr>\n<tr>\n<td valign=\"top\">Variability of filling a volumetric flask to the mark, variability of filling the pipette to the mark<\/td>\n<td valign=\"top\">Random<\/td>\n<td valign=\"top\">Repeatability of filling the flask\/pipetting<\/td>\n<\/tr>\n<tr>\n<td valign=\"top\">Systematically titrating until the indicator is very strongly coloured<\/td>\n<td valign=\"top\">Systematic (causes systematically higher titration results)<\/td>\n<td valign=\"top\">Uncertainty of the titration end-point determination<\/td>\n<\/tr>\n<tr>\n<td valign=\"top\">Systematically grinding the sample for a shorter time than should be done, leading to a less dispersed sample and lowered recovery<\/td>\n<td valign=\"top\">Systematic<\/td>\n<td valign=\"top\">Uncertainty due to sample preparation (uncertainty due to recovery)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr>\n<p><strong>5. Can we report V = (10.006 \u00b1 0.016) mL at 95 % CL at a coverage factor of 2?<\/strong><\/p>\n<p>In this course we use the conventional rounding rules for uncertainty. 
Therefore the uncertainty \u00b10.0154 ml is rounded to \u00b10.015 ml. Sometimes it is recommended to round uncertainties only upwards (leading in this case to \u00b10.016 ml). However, in the graded test quizzes please use the conventional rounding rules.<\/p>\n<hr>\n<p><strong>6. How can I attach my photo to my Moodle profile?<\/strong><\/p>\n<p>This is done from your profile in Moodle. Click on your name on the right on the status bar, then click \u201cProfile\u201d, then \u201cEdit profile\u201d.<\/p>\n<hr>\n<p><strong>7. In case of a simple titration, if replicate titrations are carried out then in the uncertainty of pipetting the uncertainty contribution of repeatability is omitted.\u00a0Why do we ignore the repeatability effect in this case when calculating the result of the titration?<\/strong><\/p>\n<p>In this case we have results of repeated titrations. Their scatter is caused, among other effects, also by pipetting repeatability. I.e. one of the reasons why different amounts of titrant were consumed in replicate titrations is the fact that the amount of pipetted acidic liquid slightly differed from titration to titration. For this reason, the repeatability of the consumed titrant volume automatically takes into account also pipetting repeatability. If we took it into account in the uncertainty of the pipetted volume as well, we would account for it twice.<\/p>\n<hr>\n<p><strong>8. Can systematic effects really count as uncertainty sources? The GUM says that the recognized systematic effects should be corrected for and the uncertainties of the resulting corrections should be taken into account.<\/strong><\/p>\n<p>Indeed, systematic effects (sources of bias) can often be reduced significantly by determining corrections and applying them. The corrections are never perfect and have uncertainties themselves. 
However, the resulting uncertainties from corrections will be mostly caused by different random effects.<\/p>\n<p>At the same time, the fact that systematic effects influence measurement results automatically means that they cause uncertainty and are thus uncertainty sources. Furthermore, although the GUM (<a title=\"https:\/\/www.bipm.org\/en\/publications\/guides\/gum.html\" href=\"https:\/\/www.bipm.org\/en\/publications\/guides\/gum.html\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/www.bipm.org\/en\/publications\/guides\/gum.html\">https:\/\/www.bipm.org\/en\/publications\/guides\/gum.html<\/a>) says that known systematic effects should preferably be corrected for, in many cases \u2013 in particular in chemistry and especially at routine lab level \u2013 correcting for the systematic effects is either impossible to do reliably or is not practical, as it would make the measurement much more expensive. It is also often unclear whether a systematic effect exists at all \u2013 in this course we often speak about possible systematic effects. In conclusion, it is often more practical to include the possible systematic effects as additional uncertainty components, rather than try to correct for all of them. 
Probably the best practical guide on this issue is the Eurachem leaflet Treatment of observed bias (<a title=\"https:\/\/www.eurachem.org\/index.php\/publications\/leaflets\/bias-trt-01\" href=\"https:\/\/www.eurachem.org\/index.php\/publications\/leaflets\/bias-trt-01\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/www.eurachem.org\/index.php\/publications\/leaflets\/bias-trt-01\">https:\/\/www.eurachem.org\/index.php\/publications\/leaflets\/bias-trt-01<\/a>).<\/p>\n<hr>\n<p><strong>9.\u00a0What is the difference between confidence interval and measurement uncertainty?<\/strong><\/p>\n<p>Measurement uncertainty defines a range (also called an interval) around the measured value within which the <strong>true value<\/strong> of the measurand lies with some predefined probability. This interval is called the <strong>coverage interval<\/strong> and measurement uncertainty is (usually) its half-width. The coverage interval has to take into account all possible effects that cause uncertainty, i.e.<strong> both random and systematic effects<\/strong>.<\/p>\n<p>A <strong>confidence interval <\/strong>is somewhat similar to the coverage interval. It typically refers to some statistical interval estimate. It expresses the level of confidence that the true value of a certain statistical parameter resides within the interval. A typical example is the confidence interval of a <strong>mean value<\/strong> found from a limited number of replicates, which is calculated from the standard deviation of the mean and the respective Student coefficient. The main difference is that we speak only of the mean value, not the true value, and <strong>only random effects <\/strong>are accounted for \u2013 i.e. 
all replicate measurements can be biased, but the confidence interval does not account for that in any way.<\/p>\n<hr>\n<p><strong>10.\u00a0What is the basis for the rule (explained in <a title=\"\" href=\"https:\/\/sisu.ut.ee\/measurement\/45-presenting-measurement-results\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/sisu.ut.ee\/measurement\/45-presenting-measurement-results\">Section 4.5<\/a>) that when the first significant digit of uncertainty is 1 .. 4 then it is presented with 2 significant digits and when it is 5 .. 9 then it is presented with one significant digit?<\/strong><\/p>\n<p>The rationale behind this rule is that the uncertainty should change by less than 10%, relative, when rounding it. If the uncertainty were e.g. 0.15 g, then rounding it to 0.2 g would change it by 33%. At the same time, if it is e.g. 0.55 g, then rounding it to 0.6 g changes it by only 9%, relative.<\/p>\n<hr>\n<p><strong>11. The true value lies within the uncertainty range with some probability. Therefore, is it OK if it is sometimes outside that range?<\/strong><\/p>\n<p>The situation that the true value is outside the uncertainty range is not impossible, but its probability is low. If it is strongly outside (i.e. far from the uncertainty range), or if it is outside for several measurement results obtained with the same method during a short period, then the most probable reason is underestimated uncertainty.<\/p>\n<p>Of course, we (almost) never know the true value, so instead of true values we usually operate with their highly reliable estimates, such as certified values of certified reference materials.<\/p>\n<hr>\n<p><strong>12.\u00a0Why do we use two-tailed t values in calculating expanded uncertainty, not one-tailed values?<\/strong><\/p>\n<p>One-tailed t values would be justified if we knew for sure that the true value is smaller or larger than our measured value. This is usually not the case and thus it is not justified to use one-tailed values. 
One-tailed values are also smaller than two-tailed values (for example: ca 1.7 vs ca 2.0, in the case of a large number of degrees of freedom and 95% coverage probability), so the use of one-tailed t values would artificially decrease the uncertainty estimate, possibly leading to underestimated uncertainty.<\/p>\n<hr>\n<p><strong>13.\u00a0When converting from rectangular or triangular distribution to the Normal distribution, where do the rules of dividing by SQRT(3) and SQRT(6) come from?<\/strong><\/p>\n<p>This is clearly beyond the scope of our course. The derivation can be found in specialised books, e.g. Rein Laaneots, Olev Mathiesen. An Introduction to Metrology. Tallinn University of Technology Press, Tallinn, 2006.<\/p>\n<p>Unfortunately I do not have a freely available source in English. There is one in Estonian: <a href=\"https:\/\/sisu.ut.ee\/wp-content\/uploads\/sites\/18\/II_vihik.pdf\" target=\"_blank\" rel=\"noopener\">file_II_vihik<\/a>. The derivation is on pages 12-13. You will probably understand the mathematical equations and you can try to translate the text with Google Translate.<\/p>\n<hr>\n<p><strong>14.\u00a0Please explain the triangular and rectangular distribution functions with some real laboratory examples.\u00a0What is\u00a0the concept behind deciding that this is a triangular and this is a rectangular distribution?<\/strong><\/p>\n<p>There are, in broad terms, two types of situation where the rectangular or triangular distribution is used:<\/p>\n<p>\u20131\u2013 When the quantity in question is indeed distributed according to these distributions. In chemistry this occurs first of all in the case of rounding of a digital reading. Example: if a thermometer shows 22 \u00b0C then, because of rounding, the value could be anywhere between 21.5 and 22.5 \u00b0C. If rounding uncertainty is the dominant uncertainty component, then we could say that this temperature is distributed according to the rectangular distribution. 
It can be shown that if two rectangularly distributed quantities (with equal uncertainty) are added or subtracted then the resulting quantity is distributed according to the triangular distribution.<br>\nThese were examples of situations where these distribution functions are \u201creal\u201d.<\/p>\n<p>\u20132\u2013 It is, however, much more common that these distribution functions are \u201cassumed\u201d or \u201cpostulated\u201d (see\u00a0<a title=\"\" href=\"https:\/\/sisu.ut.ee\/measurement\/34-other-distribution-functions-rectangular-and-triangular-distribution\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/sisu.ut.ee\/measurement\/34-other-distribution-functions-rectangular-and-triangular-distribution\">Section 3.5<\/a>). This need arises whenever you need to use some uncertainty estimate that is presented in the form \u201c\u00b1 X\u201d and you have no knowledge of the underlying distribution of that quantity. In such a case we usually recommend assuming the rectangular distribution, as it is safer (lower probability of underestimating uncertainty) than assuming the triangular distribution. Examples include: calibration uncertainties of volumetric ware, uncertainties of purchased standard solution concentrations, uncertainty due to possible interferents (see\u00a0<a title=\"\" href=\"https:\/\/sisu.ut.ee\/measurement\/95-step-5-standard-uncertainties-input-quantities\/\" target=\"_blank\" rel=\"noopener\" data-url=\"http:\/\/sisu.ut.ee\/measurement\/95-step-5-%E2%80%93-standard-uncertainties-input-quantities\">Section 9.5<\/a>), uncertainties of educated guesses\/expert opinions, uncertainties of various systematic effects of measurement instruments, etc. The course materials contain quite a few examples of the use of these distributions, as well as self-tests. 
Please see sections\u00a0<a title=\"\" href=\"https:\/\/sisu.ut.ee\/measurement\/34-other-distribution-functions-rectangular-and-triangular-distribution\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/sisu.ut.ee\/measurement\/34-other-distribution-functions-rectangular-and-triangular-distribution\">3.5<\/a>,\u00a0<a title=\"\" href=\"https:\/\/sisu.ut.ee\/measurement\/41-quantifying-uncertainty-components\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/sisu.ut.ee\/measurement\/41-quantifying-uncertainty-components\">4.1<\/a>,\u00a0<a title=\"\" href=\"https:\/\/sisu.ut.ee\/measurement\/95-step-5-standard-uncertainties-input-quantities\/\" target=\"_blank\" rel=\"noopener\" data-url=\"http:\/\/sisu.ut.ee\/measurement\/95-step-5-%E2%80%93-standard-uncertainties-input-quantities\">9.5<\/a>\u00a0and self-tests\u00a0<a title=\"\" href=\"https:\/\/sisu.ut.ee\/measurement\/self-test-3-5\/\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/sisu.ut.ee\/measurement\/node\/2263\">3.5<\/a>,\u00a0<a title=\"\" href=\"https:\/\/sisu.ut.ee\/measurement\/self-test-9-a\/\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/sisu.ut.ee\/measurement\/node\/1265\">9 A<\/a>,\u00a0<a title=\"\" href=\"https:\/\/sisu.ut.ee\/measurement\/self-test-9-b\/\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/sisu.ut.ee\/measurement\/measurement-1-0\">9 B<\/a>.<\/p>\n<hr>\n<p><strong>15.\u00a0Does failing even one graded test quiz mean total failure (even though the rest of the quizzes are successful), so that the participant does not\u00a0receive the digital certificate of completion?<\/strong><\/p>\n<p>Exactly, failing one graded test means failing the whole course and not getting the certificate of completion for this edition of the course. 
But of course, you are welcome to attend again next year.<\/p>\n<p>Failing one test means that you have not acquired all the knowledge that you should have acquired from this course, and so the learning outcomes are not fulfilled. For an analogous example: should you get a driver\u2019s licence if you can change gears really well, but steering is an obstacle for you? Probably not \u2013 for successful driving you need to be able to handle all aspects of controlling the car. It is the same with uncertainty.<\/p>\n<p>Therefore, we\u00a0strongly suggest not wasting attempts. Before starting a new attempt, please try to reproduce, with the last dataset, the answer provided by the system and find your mistake.<\/p>\n<hr>\n<p><strong>16. Is it always preferable to use your own calibration data of volumetric instruments?<\/strong><\/p>\n<p>This depends on how high an accuracy of volumetric measurement you need.<\/p>\n<p>If high accuracy of volumetric measurements is needed, then it is more correct to calibrate the glassware yourself. Why? Because the uncertainty of the calibration consists to a large extent of the so-called \u201chuman factor\u201d, so the working manner during calibration and during use should be the same. If more people use the same glassware, then everyone should calibrate it for themselves, e.g. person X should not use the pipette with the calibration data obtained by person Y.<\/p>\n<p>If high accuracy of volumetric measurement is not needed (i.e. 
if in the method used much more uncertainty comes from sources other than volumetric measurement), then the uncertainties assigned to glassware by manufacturers are usually sufficiently low.<\/p>\n<hr>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p><strong>17.\u00a0Is \u201cIncomplete sample matrix decomposition during digestion\u201d a systematic or a random effect?<\/strong><\/p>\n<\/div>\n<p>\u201cIncomplete sample matrix decomposition during digestion\u201d causes a systematic effect. Your result will always be somewhat lower than it should be, because you effectively lose some analyte.<\/p>\n<p>But it is important to add that the \u201cextent of incompleteness\u201d will almost certainly vary from sample to sample. So, you always get a lower result (and there is a systematic effect), but sometimes it is \u201cmore lower\u201d, sometimes \u201cless lower\u201d. This means that there is additionally a random effect \u201csitting on top\u201d of the systematic effect. It is actually quite common that analyte losses by decomposition, incomplete extraction, etc., or analyte addition by contamination and other similar systematic effects are accompanied by (often quite large) random effects.<\/p>\n<hr>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p><strong>18. How can we check the validity of our uncertainty estimates? Can we use <i>s<\/i><sub>RW<\/sub> &lt; <i>u<\/i><sub>c<\/sub> or PT z-score &lt; 2 as criteria?<\/strong><\/p>\n<\/div>\n<p>The best check for the validity of your uncertainty estimate is to compare with an independent result obtained for the same sample. A very common check is, e.g., analyzing a CRM and then comparing your result with the reference value of the CRM, e.g. using the zeta score as described in <a href=\"https:\/\/sisu.ut.ee\/measurement\/12-using-measurement-uncertainty-estimates-decision-making\" target=\"_blank\" rel=\"noopener\">Section 12<\/a>. 
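As an aside, the zeta-score comparison against a CRM can be sketched in a few lines of Python; the function name `zeta_score` and all numbers below are invented for illustration and are not from the course materials:

```python
import math

def zeta_score(x_lab, u_lab, x_ref, u_ref):
    """Difference between the lab result and the reference value,
    scaled by the combined standard uncertainty of the two."""
    return (x_lab - x_ref) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# Hypothetical data: certified value 5.00 with standard uncertainty 0.05,
# lab result 5.12 with combined standard uncertainty 0.08.
z = zeta_score(5.12, 0.08, 5.00, 0.05)
print(round(z, 2))  # about 1.27
```

If |zeta| is clearly above 2, the lab result and the CRM value disagree by more than the combined uncertainty can explain, pointing to an underestimated uncertainty or an uncorrected bias.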
Also, if you participate in a PT, then comparing your result with the PT consensus value is useful. In the case of a PT the consensus values usually do not have uncertainty estimates. Then a simple, although not 100% rigorous, approach is to see whether the consensus value is within the <i>k<\/i> = 2 uncertainty range of your result.<\/p>\n<p>Concerning the two ways proposed by you: Just the fact that <i>s<\/i><sub>RW<\/sub> &lt; <i>u<\/i><sub>c<\/sub> does not say that <i>u<\/i><sub>c<\/sub> has been correctly estimated. It can still be underestimated (or overestimated). And z-scores of PTs do not say anything about the validity of your uncertainty estimate. But of course z-scores are still useful for getting an idea of how similar your measurement result is to those of other laboratories.<\/p>\n<hr>\n<article id=\"p1458264\" aria-describedby=\"post-content-1458264\" aria-labelledby=\"post-header-1458264-5e916f74206965e916f740d21867\" data-post-id=\"1458264\" data-region=\"post\" data-target=\"1458264-target\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p><strong>19.\u00a0<span style=\"line-height: 107%;\">I understand the different sources of uncertainty well in the example, but it strikes me that the standard deviation value is used to calculate the repeatability uncertainty and then taken into account again in calculating the calibration uncertainty \u2013 is this not an overestimate of uncertainty?<\/span><\/strong><\/p>\n<\/div>\n<p>Repeatability indeed influences pipetting twice: once when the pipette is calibrated and a second time when the actual pipetting is done. So, indeed, it has to be accounted for in both cases.<\/p>\n<\/article>\n<p>However, as you could see, repeatability is taken into account differently in the two cases. In the case of the actual pipetting you take it into account as the standard deviation of an individual measurement. In the case of calibration \u2013 as the standard deviation of the mean. 
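The difference between the standard deviation of an individual value and the standard deviation of the mean can be sketched numerically; this is a minimal Python illustration with invented replicate volumes, not data from the course:

```python
import math
import statistics

# Hypothetical replicate delivered volumes (ml) from a pipette calibration.
volumes = [10.006, 10.012, 10.001, 10.009, 10.004, 10.008]

s = statistics.stdev(volumes)         # std. deviation of an individual delivery
s_mean = s / math.sqrt(len(volumes))  # std. deviation of the mean

# s characterises the scatter of a single pipetting; s_mean characterises
# how well the mean (used as the calibration correction) is known.
print(s > s_mean)  # True: the mean is known better than any single value
```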
The more individual measurements have been done for a pipette calibration, the more reliable is the correction value. Therefore, the uncertainty of calibration is also smaller: we use the standard deviation of the mean for the calibration uncertainty, and the standard deviation of the mean depends on the number of individual measurements. Moreover, the calibration uncertainty given by the manufacturer is usually much higher: in our example in <a title=\"Section 4.6\" href=\"https:\/\/sisu.ut.ee\/measurement\/41-naidisulesandeks\/\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/sisu.ut.ee\/measurement\/41-n%C3%A4idis%C3%BClesandeks\">Section 4.6<\/a>\u00a0it is approximately 10 times higher than the one we have obtained.<\/p>\n<hr>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p><strong>20.\u00a0I cannot figure out how the standard deviations of <em>b<\/em><sub>1<\/sub> and <em>b<\/em><sub>0<\/sub> are calculated. The solved Excel file in Section 9.7 has the same formula for <em>s<\/em>(<em>b<\/em><sub>1<\/sub>) and <em>s<\/em>(<em>b<\/em><sub>0<\/sub>) in all of the cells: \u201c=LINEST(C7:C11,B7:B11,1,1)\u201d<\/strong><\/p>\n<p>The calculation in the original file is carried out with the LINEST function. It is a peculiar function in that it returns a matrix (i.e. a small table of values), not a single value.<\/p>\n<p>Its usage is quite well described in the Excel help. Let me give here just the main steps:<\/p>\n<p>(1)\u00a0\u00a0\u00a0\u00a0Mark the matrix area \u2013 two columns, three rows.<\/p>\n<p>(2)\u00a0\u00a0\u00a0\u00a0Immediately start typing the function, e.g. \u201c=LINEST(C7:C11,B7:B11,1,1)\u201d (without quotation marks). Instead of commas \u201c,\u201d you may need to use semicolons \u201c;\u201d as separators, depending on your language settings. C7:C11 stands for the analytical signals, B7:B11 for the concentrations. 
The first \u201c1\u201d tells LINEST not to force the intercept to zero and the second \u201c1\u201d requests the full set of regression statistics.<\/p>\n<p>(3)\u00a0\u00a0\u00a0\u00a0While typing, the text will appear in just one of the six marked cells; this is OK, and it does not matter in which of them.<\/p>\n<p>(4)\u00a0\u00a0\u00a0\u00a0Press CTRL-SHIFT-ENTER. (Not just ENTER!)<\/p>\n<p>(5)\u00a0\u00a0\u00a0\u00a0The sample file uncertainty_of_photometric_nh4_determination_kragten_initial.xls in\u00a0<a title=\"\" href=\"https:\/\/sisu.ut.ee\/measurement\/97-step-7-calculating-combined-standard-uncertainty\/\" target=\"_blank\" rel=\"noopener\" data-url=\"https:\/\/sisu.ut.ee\/measurement\/97-step-7-%E2%80%93-calculating-combined-standard-uncertainty\">Section 9.7<\/a>\u00a0shows which parameters are in which cells.<\/p>\n<p>Now you have an \u201cautomatic\u201d function which is linked to the calibration data: every time you change something in the calibration data, all regression parameters are immediately recalculated.<\/p>\n<hr>\n<p><strong>21.\u00a0<span style=\"line-height: 107%;\">Does the density of most liquids decrease with temperature? From the context, would the \u201cdensity\u201d parameter refer to the amount (in mass) of the liquid or to its volume that affects the pipette\u2019s delivered volume?<\/span><\/strong><\/p>\n<\/div>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p>Yes, in the case of most liquids the density decreases when the temperature increases. The main idea behind \u201cuncertainty of volume due to temperature\u201d is that in almost all practical cases in analytical chemistry liquid volume is defined as the volume at 20 \u00b0C. 
I.e., if a 10.00 ml pipette is calibrated at 20 \u00b0C, then when using it, for example, at 25 \u00b0C, the pipetted volume of liquid at 25 \u00b0C is indeed 10.00 ml (in this temperature range the volume of the glassware itself changes so little that it can be neglected), but the amount of liquid (in terms of mass or number of molecules) is smaller than the amount pipetted at 20 \u00b0C, although the pipetted volume is the same at both temperatures. So if the volume of liquid that was 10.00 ml at 25 \u00b0C were cooled to 20 \u00b0C, its volume would be about 9.99 ml.<br>\nSince temperature differences from 20 \u00b0C are usually small, the changes in density are not very large and, therefore, the bias is also relatively small in most situations. Therefore, in most cases we do not have to correct the volume, but we take this small effect into account as a measurement uncertainty component.<\/p>\n<hr>\n<p><strong>22.\u00a0It is still unclear for me when to use the standard deviation of the mean and when to use the standard deviation of an individual value in uncertainty evaluation. For example, if I have these values in an experiment: 3.2, 3.6, 3.4, 3.0, 3.9 and I calculate the standard deviation and get a value of 0.349, can I report any individual value as 3.4 and a standard deviation of 0.349?<\/strong><\/p>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p>This question is explained in <a href=\"https:\/\/sisu.ut.ee\/measurement\/33-standard-deviation-mean\" target=\"_blank\" rel=\"noopener\">Section 3.4<\/a>, but let me try to give some additional explanations here.<\/p>\n<p>The general rule: whenever it is feasible to make replicate measurements of the quantity you are measuring, please do it. 
In such cases you should use the mean as the quantity value, and for estimating repeatability you should use the standard deviation of the mean.<\/p>\n<p>Thus, in the example that you are giving, you should certainly report the <i>mean value<\/i>, not a random individual value, and as repeatability (assuming you did the measurements on the same day) you should use the <i>standard deviation of the mean<\/i>.<\/p>\n<p>Now, when do you use the standard deviation of an individual value? This is done in cases when <i>for your concrete measurement with your concrete object<\/i> you cannot do replicates (or it is not feasible or reasonable). Therefore, you do your measurement just once. And the repeatability of your measurement you estimate from some other experiment that can be repeated.<\/p>\n<p>Two examples:<\/p>\n<p>\u2014 Pipetting: if you need to pipette 10 ml of some solution during your analysis, then you cannot do averaging: you cannot pipette 5 times and then somehow \u201caverage\u201d the volume. Instead you do the pipetting in the course of that analysis <i>just once<\/i> and you estimate repeatability separately (e.g. by pipetting the same amount of water numerous times). In this case, since <i>you pipetted just once<\/i> in your analysis, you will use the <i>standard deviation of an individual result<\/i>.<\/p>\n<p>\u2014 Overall repeatability or within-lab reproducibility of an analysis: if you typically analyze your routine samples without replicates, then you can estimate the repeatability (or within-lab reproducibility) separately with some control sample which you analyze several times. 
If that control sample is sufficiently similar to your routine samples then the obtained standard deviation can also be applied to your routine samples (this approach is, for example, used in the <a href=\"https:\/\/sisu.ut.ee\/measurement\/10-approach-based-validation-and-quality-control-data-top-down-approach\" target=\"_blank\" rel=\"noopener\">Nordtest uncertainty approach<\/a>). Since you <i>analyze your routine samples just once<\/i>, you should use the <i>standard deviation of a single analysis<\/i>, not the standard deviation of the mean, for quantifying repeatability (or within-lab reproducibility).<\/p>\n<hr>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p><strong>23.\u00a0How many repetitions should we perform\u00a0for estimating uncertainty due to non-ideal repeatability?<\/strong><\/p>\n<p>It depends on your setup, needs and possibilities. When using some standard procedure, the number of replicates required may be given in the standard. If your aim is to achieve a very low uncertainty level using the mean value, and (importantly) if uncertainty due to repeatability is an important uncertainty source, then the more replicates you can do, the better the result you obtain.<\/p>\n<p>However, in practice we cannot usually perform many replicate measurements, for several practical reasons: we do not have a sufficient amount of sample, we are limited in time and\/or finances, etc. Therefore, as soon as you are able to calculate the standard deviation, e.g. already with just 3 measurements, you can have a first rough estimate of the repeatability. But do not stop there! You should collect more data, e.g. on similar samples, and you can pool the data using the pooled standard deviation approach.<\/p>\n<hr>\n<p><strong>24. 
<\/strong><b>What is the uncertainty of a measurement result that is an average of two results of which both have uncertainties<\/b><strong>?<\/strong><\/p>\n<\/div>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p style=\"margin-bottom: 5.0pt;\"><span style=\"line-height: normal;\"><span style=\"font-weight: normal;\">This is a more difficult issue than one might think. What is presented here is a simplistic and conservative approach.<\/span><\/span><\/p>\n<p style=\"margin-bottom: 5.0pt;\"><span style=\"line-height: normal;\"><span style=\"font-weight: normal;\">If the individual results are <i>X<\/i><sub>1<\/sub> and <i>X<\/i><sub>2<\/sub> with combined standard uncertainties <i>u<\/i><sub>c<\/sub>(<i>X<\/i><sub>1<\/sub>) and <i>u<\/i><sub>c<\/sub>(<i>X<\/i><sub>2<\/sub>) then, provided that the uncertainties are not too different, the value to be presented as the final result can be the simple average of the values <i>X<\/i><sub>1<\/sub> and <i>X<\/i><sub>2<\/sub>:\u00a0 \u00a0<i>X<\/i> = (<i>X<\/i><sub>1<\/sub> + <i>X<\/i><sub>2<\/sub>) \/ 2. If the uncertainties are very different then a weighted average should be used, whereby 1\/<i>u<\/i><sub>c<\/sub>(<i>X<\/i><sub>1<\/sub>)<sup>2<\/sup> and 1\/<i>u<\/i><sub>c<\/sub>(<i>X<\/i><sub>2<\/sub>)<sup>2<\/sup> are used as weights.<\/span><\/span><\/p>\n<p style=\"margin-bottom: 5.0pt;\"><span style=\"line-height: normal;\"><span style=\"font-weight: normal;\">The combined standard uncertainty <i>u<\/i><sub>c<\/sub>(<i>X<\/i>) can be conservatively estimated as follows: Take the higher of the values <i>X<\/i><sub>1<\/sub> and <i>X<\/i><sub>2<\/sub> and add to it its combined standard uncertainty. What you get is the upper uncertainty limit <i>L<\/i><sub>U<\/sub>. Then take the lower of the values and subtract from it its combined standard uncertainty. This way you get the lower uncertainty limit <i>L<\/i><sub>L<\/sub>. Calculate the distances of the limits from <i>X<\/i>. 
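This conservative limit-based procedure can be sketched in Python (for concreteness, the numbers of the worked example below are used; the variable names are ours):

```python
# Conservative uncertainty for the average of two results (arbitrary units)
x1, u1 = 154.0, 7.0   # result 1 and its combined standard uncertainty
x2, u2 = 160.0, 9.0   # result 2 and its combined standard uncertainty

x = (x1 + x2) / 2     # simple average of the two results

# Upper limit: higher value plus its uncertainty; lower limit: lower value
# minus its uncertainty; the larger distance from x serves as u_c(x).
upper = (x1 + u1) if x1 >= x2 else (x2 + u2)
lower = (x2 - u2) if x1 >= x2 else (x1 - u1)
u_c = max(upper - x, x - lower)

print(x, u_c)  # 157.0 12.0
```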
The larger of the two distances can be used as the combined standard uncertainty estimate.<\/span><\/span><\/p>\n<p style=\"margin-bottom: 5.0pt;\"><span style=\"line-height: normal;\"><span style=\"font-weight: normal;\">Example (data with arbitrary units): <i>X<\/i><sub>1<\/sub> = 154; <i>u<\/i><sub>c<\/sub>(<i>X<\/i><sub>1<\/sub>) = 7; <i>X<\/i><sub>2<\/sub> = 160; <i>u<\/i><sub>c<\/sub>(<i>X<\/i><sub>2<\/sub>) = 9. In this case <i>X<\/i> = 157 and <i>u<\/i><sub>c<\/sub>(<i>X<\/i>) = 12.<\/span><\/span><\/p>\n<hr>\n<p style=\"margin-bottom: 5.0pt;\"><strong>25. <\/strong><b>For intermediate precision, if we happen to have an anomalous value due to gross error, is it safe to just omit the value? Or is it necessary to perform statistical treatment to justify this?<\/b><\/p>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p style=\"margin-bottom: 5.0pt;\">Leaving data points out is a very tricky thing. As a very general recommendation: avoid leaving data points out on a statistical basis, if you can. However, leave them out if a physical\/chemical reason is found. Just some examples of what could be such reasons: there was a precipitate in the derivatization reagent solution, which has never been there; the slope of the calibration graph on that day was lower than usual; the retention time of your analyte differed from what it has usually been.<\/p>\n<hr>\n<p style=\"margin-bottom: 5.0pt;\"><strong>26. It has been said that for determining within-lab reproducibility, a longer timeframe with fewer replicates is typically preferred over a shorter time with more replicates. Since it\u2019s opinion-based, what could be the argument when the other scenario is preferred?<\/strong><\/p>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p style=\"margin-bottom: 5.0pt;\">Possible example: 4 data points over 6 months as opposed to 14 data points over 4 months. 
In this case I would probably prefer the latter.<\/p>\n<hr>\n<p style=\"margin-bottom: 5.0pt;\"><strong>27. For within-lab reproducibility evaluation, it was emphasised that it should be the same sample (stable, homogeneous, and available in sufficient amount). But given the case that we don\u2019t have sufficiently large samples and can only do a few\u00a0replicates, can we use pooled SD for evaluation of reproducibility?<\/strong><\/p>\n<p style=\"margin-bottom: 5.0pt;\">Indeed, both in the case of repeatability and within-lab reproducibility, for the calculation of a standard deviation it has to be the same sample. But pooling of standard deviations can also be done if the samples are not the same but are similar.<\/p>\n<hr>\n<p style=\"margin-bottom: 5.0pt;\"><strong>28. When you are starting to implement methods, you don\u2019t have the long-term data set to produce a viable s<sub>RW<\/sub> analysis. Let\u2019s say your laboratory was hired to do a new analysis, and you\u2019d have a strict time frame to develop and implement this method. You have neither routine samples nor old data, because this is a new method. However, you are required to present uncertainty calculations to your customer in order to start analysing samples. But you need samples and data to calculate the uncertainty using the Nordtest approach. How to get out of this vicious circle in which you need the uncertainty to start the analysis and need the analysis to calculate the uncertainty?<\/strong><\/p>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p>This is a typical \u201cStart with little but do not stop there!\u201d situation in the case of the single-lab validation approach (see also <a title=\"\" href=\"http:\/\/doi.org\/10.25135\/jcm.50.20.10.1838\" target=\"_blank\" rel=\"noopener\" data-url=\"http:\/\/doi.org\/10.25135\/jcm.50.20.10.1838\">I. Leito, I. Helm J. Chem. Metrol. 2020, 14:2, 83-87<\/a>). 
So, in the very beginning, just one or two weeks of data can be used. It is not ideal, but it is much better than nothing. And as time goes on, you get more data and more reliable uncertainty estimates.<\/p>\n<hr>\n<p><strong>29. Aren\u2019t we\u00a0underestimating the actual purity value\u00a0when assuming the middle of the range as the most probable value for purity?\u00a0If a minimum is given, shouldn\u2019t we assume the probability distribution leans towards this minimum instead of the probability distribution being equally spread between this value and 100%? I\u2019m thinking that, business-wise, providers may elect to keep their purer lots to sell as a higher purity grade.<\/strong><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p>Indeed, if we do not have any other information than, say, \u201cat least 98% purity\u201d then we can only assume what the distribution is and where the actual purity can be. A rectangular distribution covering the whole interval from 98% to 100% is quite conservative\/safe.<\/p>\n<p>Moreover, having spoken to a person familiar with the chemical industry I learned that producers typically want \u201cto play it safe\u201d, i.e. they will declare a minimum purity that they can safely achieve. This means that a chemical declared as having at least 98% purity can in fact often be 99% or more. But in some batches, it is below 99% and therefore, in order to be safe and avoid any disputes or accusations, they declare \u201cat least 98%\u201d.<\/p>\n<p>In this course we recommend that \u201cat least 98% purity\u201d should be interpreted as (99 \u00b1 1)%, assuming a rectangular distribution. 
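The numbers that follow from the (99 ± 1)% rectangular-distribution assumption can be checked with a quick sketch:

```python
import math

half_width = 1.0               # half-width of the (99 +/- 1)% interval, in % units
u = half_width / math.sqrt(3)  # standard uncertainty for a rectangular distribution
U = 2 * u                      # expanded uncertainty, k = 2
lower_limit = 99.0 - U         # lower edge of the ~95% coverage interval

print(round(u, 2), round(U, 2), round(lower_limit, 2))  # 0.58 1.15 97.85
```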
This means that the standard uncertainty of purity is 0.58% and the k = 2 expanded uncertainty is 1.15%. Consequently, the uncertainty range with roughly 95% coverage extends down to 97.85%. I.e. the low-probability situation where purity is slightly below 98% is in fact also covered.<\/p>\n<p>However, importantly, all of the above holds if there is no additional information. Whenever you have more information, e.g. that the actual purity is around 98%, you are welcome to use different estimates for the purity and its uncertainty.<\/p>\n<\/div>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<hr>\n<\/div>\n<\/div>\n<p><strong>30.\u00a0I don\u2019t understand how the degrees of freedom are found for different types of uncertainty estimates.<\/strong><\/p>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p><b>In the case of the B-type uncertainty estimates<\/b>\u00a0the formal number of degrees of freedom is infinity. This applies also to those cases where we do not have clear knowledge about the exact distribution of the uncertainty and assume a rectangular distribution.<br>\nIt is not possible to do calculations with infinity. Therefore, people usually pick some number that is large in the context of numbers of replicate measurements. 30 and 50 are quite common and are both OK. You can also pick 37, 48, 61 \u2013 all are OK. Why do people typically not pick e.g. 100\u00a0000 or billions? 
By picking a \u201crealistic\u201d number (in terms of number of replicate measurements) we allow for a small probability, in the calculation of the expanded uncertainty, that in some rare cases the true value can be slightly outside the interval of the rectangular distribution.<\/p>\n<p><b>In the case of the A-type uncertainty estimates<\/b>\u00a0the generalized way of interpreting the number of degrees of freedom (<i>df<\/i>) is:\u00a0\u00a0<i>df<\/i>\u00a0=\u00a0<i>n<\/i>\u00a0\u2013\u00a0<i>m<\/i>. Here\u00a0<i>n<\/i>\u00a0is the number of parallel measurements and\u00a0<i>m<\/i>\u00a0is the number of parameters that are obtained from those measurements by data analysis.<br>\nIf you do e.g. a number of titrations and calculate the mean value\u00a0then\u00a0<i>m<\/i>\u00a0= 1, because there is only one parameter value that you are getting: the mean value. If you do linear regression without forcing the intercept to zero then\u00a0<i>m<\/i>\u00a0= 2, because you are getting the values of two parameters: the slope and the intercept.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<hr>\n<\/div>\n<\/div>\n<p><strong>31. In the lecture, it was explained that\u00a0<i id=\"yui_3_17_2_1_1684234340948_36\">RMS<\/i><sub>bias<\/sub>\u00a0is the average bias. However, in equation 10.3 it\u00a0is calculated as the square root of the sum of squared biases divided by n (where\u00a0<em>n<\/em>\u00a0is the number of bias determinations carried out) and is therefore always positive.\u00a0Shouldn\u2019t the\u00a0average bias be found by SUM(bias)\/n so that it can be positive or negative?<\/strong><\/p>\n<p>Indeed, <i>RMS<\/i><sub>bias<\/sub>\u00a0is the average bias. 
However, the word\u00a0<i>average\u00a0<\/i>here does not mean\u00a0<i>arithmetic mean<\/i>\u00a0(which is expressed by SUM(bias)\/n) but\u00a0<i>root mean square<\/i>\u00a0(RMS, also known as\u00a0<i>quadratic mean<\/i>). There are also numerous other means (see e.g.\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Mean\" target=\"_blank\" rel=\"noopener\">https:\/\/en.wikipedia.org\/wiki\/Mean<\/a>).<br>\nMathematically, RMS means that we calculate the arithmetic mean of the\u00a0<i>squared<\/i>\u00a0values and then take the square root.<\/p>\n<p>There are three reasons why in the case of averaging biases we use RMS and not the arithmetic mean:<\/p>\n<p>(1)\u00a0\u00a0\u00a0\u00a0The arithmetic mean of bias can be positive or negative. It also has the important property that positive and negative values cancel each other. Thus, if we have determined two (relative) bias values as -10% and 10% then the arithmetic mean is 0%. At the same time, their RMS is 10%. Obviously 0% in this case is not an adequate estimate of uncertainty due to possible bias, as our real sample might also have a positive or a negative bias.<\/p>\n<p>(2)\u00a0\u00a0\u00a0\u00a0RMS amplifies the influence of larger (in absolute terms) values, thereby making the\u00a0<i>RMS<\/i><sub>bias<\/sub>\u00a0a more conservative estimate of the uncertainty due to possible bias than just the arithmetic mean of biases. As an example, let us assume that two bias determinations gave 2% and 8%. Their arithmetic mean is 5% but their RMS is 5.8%.<\/p>\n<p>(3)\u00a0\u00a0\u00a0\u00a0Finally, and most importantly, there is a fundamental mathematical reason why we almost never add standard deviations, uncertainties, and other similar parameters but instead almost always make all calculations with their\u00a0<i>squares<\/i>\u00a0and then take the square root. If you look at the equations used in this course then you see that this is a pervasive situation. 
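The numbers in points (1) and (2) above can be reproduced with a short sketch:

```python
import math
import statistics

def rms(values):
    """Root mean square: square root of the arithmetic mean of the squared values."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# Point (1): opposite biases cancel in the arithmetic mean but not in the RMS
print(statistics.mean([-10.0, 10.0]), rms([-10.0, 10.0]))      # 0.0 10.0

# Point (2): RMS weights the larger bias more heavily
print(statistics.mean([2.0, 8.0]), round(rms([2.0, 8.0]), 1))  # 5.0 5.8
```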
The reason is that standard deviations are in mathematical terms not\u00a0<i>additive\u00a0<\/i>(i.e. you are not allowed to add them) but their squares (statisticians prefer calling them\u00a0<i>variances<\/i>) are additive. Thus, from the fundamental mathematical standpoint, just adding uncertainties, repeatabilities, biases, etc. is incorrect.<\/p>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<hr>\n<\/div>\n<\/div>\n<p><strong>32.\u00a0<\/strong><strong>How can systematic effects be minimized or even eliminated?<\/strong><\/p>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<p><strong>Minimization of the systematic effect of pipetting<\/strong>. One source of measurement uncertainty in pipetting is the calibration uncertainty of the pipette. If we use the tolerance range given by the manufacturer to estimate its contribution, then it is a systematic uncertainty source. When we calibrate the pipette ourselves, we obtain a correction factor that has an uncertainty that is typically several times lower than the tolerance range specified by the manufacturer. So, the systematic uncertainty component gets significantly reduced. It is important to stress that although the uncertainty of the correction factor is mostly caused by the standard deviation of repeated measurement results, the uncertainty of calibration, although significantly reduced, will still act as a systematic uncertainty source in future measurements. 
This is because the obtained mean value of the correction factor may be either lower or higher than the \u201ctrue\u201d correction factor and it will be either higher or lower for all future uses of the pipette, thus causing a (small) systematic effect.<\/p>\n<p><strong>Eliminating systematic uncertainty of pipetting altogether<\/strong>. Let us assume that we have a method where, during the preparation of sample solutions and calibration graph solutions, we need to pipet equal amounts of some solution into both the sample solutions and the calibration solutions. In this case, if we use the same pipette for all solutions, the systematic effect will be completely eliminated. This is because the slight systematic over- or under-pipetting in the sample solutions will be exactly compensated by the same slight systematic over- or under-pipetting in the calibration solutions.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<div style=\"border-bottom: solidwindowtext1.0pt; padding: 0cm0cm1.0pt0cm;\">\n<hr>\n<\/div>\n<\/div>\n<p><strong>33. <\/strong><strong>How much does the mathematical treatment lose efficiency when there are many procedural steps?<\/strong><\/p>\n<p>Mathematical treatment as such does not lose its efficiency. The problem is rather that, if there are many steps in the procedure, it is often difficult (1) to identify all the uncertainty contributions of the steps and\/or (2) to realistically evaluate their magnitudes. So, the problem is rather in the input data for the mathematics than in the mathematics itself. 
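This can be made concrete with a small sketch: standard uncertainty components combine in quadrature, so the combined uncertainty is dominated by the largest contributions, and a missed major component leads directly to an underestimated result. The component values below are hypothetical.

```python
import math

# Hypothetical relative standard uncertainties of the steps of a multi-step
# procedure, e.g. weighing, two dilutions, calibration, repeatability
components = [0.001, 0.002, 0.002, 0.010, 0.007]

# Components combine in quadrature: square, sum, take the square root
u_rel = math.sqrt(sum(u ** 2 for u in components))
print(round(u_rel, 4))  # 0.0126

# Omitting the largest component reduces the estimate substantially,
# while omitting the smallest one changes it only negligibly.
u_without_largest = math.sqrt(sum(u ** 2 for u in [0.001, 0.002, 0.002, 0.007]))
print(round(u_without_largest, 4))  # 0.0076
```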
In the case of complex methods (especially if low levels of analytes are determined in difficult matrices) it can easily happen that some important uncertainty source is not recognised or its effect is underestimated, especially if the analyst has limited experience. If that happens then underestimated uncertainty is obtained. In the case of complex analysis procedures the single-lab validation approach (the \u201cNordtest\u201d approach) presented in <a href=\"https:\/\/sisu.ut.ee\/measurement\/10-approach-based-validation-and-quality-control-data-top-down-approach\/\" target=\"_blank\" rel=\"noopener\">Section 10<\/a> is a safer option.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<hr>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>0. Please see this paper:\u00a0Leito, I., Helm, I. Metrology in chemistry: some questions and answers. J.Chem.Metrol. 2020,\u00a014:2, 83-87 for a number of questions and answers relevant to practical chemical analysis situations. 1. 
How many decimal places should we leave after &#8230;<\/p>\n","protected":false},"author":14,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"class_list":["post-32","page","type-page","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/sisu.ut.ee\/measurement\/wp-json\/wp\/v2\/pages\/32","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sisu.ut.ee\/measurement\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sisu.ut.ee\/measurement\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sisu.ut.ee\/measurement\/wp-json\/wp\/v2\/users\/14"}],"replies":[{"embeddable":true,"href":"https:\/\/sisu.ut.ee\/measurement\/wp-json\/wp\/v2\/comments?post=32"}],"version-history":[{"count":10,"href":"https:\/\/sisu.ut.ee\/measurement\/wp-json\/wp\/v2\/pages\/32\/revisions"}],"predecessor-version":[{"id":876,"href":"https:\/\/sisu.ut.ee\/measurement\/wp-json\/wp\/v2\/pages\/32\/revisions\/876"}],"wp:attachment":[{"href":"https:\/\/sisu.ut.ee\/measurement\/wp-json\/wp\/v2\/media?parent=32"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}