CALIBRATION AND CONTROL GUIDE II


A control sample (or simply a control) is a tool that allows us to verify whether a given measurement has been performed correctly, by comparing the result of that measurement with the expected value for the control. In addition, if the control is sufficiently similar to the samples, we can extrapolate that they will behave in the same way, which allows us to assess the quality of the measurement through statistical treatment.

WHY SHOULD WE CONTROL?

The purpose of analysis laboratories is to provide information about the composition of the samples, which in turn is used to make decisions. It is important, therefore, that such information be accurate and fit for its purpose, so criteria must be established that indicate the degree of reliability of the measurements taken.

WHEN SHOULD WE CONTROL?

Controls should always be used to verify that the measurement conditions are still valid. Although maximum certainty is obtained by checking before the analysis starts, during the analysis process and at the end of it, it is usually enough to run the control at the beginning of the measurements, so that any required corrections can be made before measuring the samples.

The system must also be controlled every time a new calibration is done, and especially if a reagent change is involved.

WHAT INFORMATION DOES THE CONTROL PROVIDE?

The repeated measurement of the same control provides valuable information about the conditions under which our system operates, including the accuracy and precision of the procedure.


Accuracy (also referred to as trueness) indicates the proximity between the measured value and the reference value of the sample (bias from target); precision indicates the degree of agreement of several repeated measures around an average value (dispersion of the values). A precise measurement is not necessarily accurate, and an accurate measurement is not necessarily precise. The results will therefore show a bias that depends on the accuracy and a dispersion that depends on the precision of our procedure. Both should be kept under control, whether the deviations are unpredictable (random error) or directly related to our measurement procedure (systematic error).
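As a minimal illustration (a Python sketch with invented numbers), both quantities can be estimated from a series of repeated measurements of the same control: the deviation of the mean from the reference value reflects accuracy, and the standard deviation reflects precision.

    from statistics import mean, stdev

    # Hypothetical series: eight repeated measurements of a control whose
    # reference (target) value is 5.00 g/L. All numbers are assumed.
    reference = 5.00
    measurements = [5.04, 4.97, 5.02, 5.08, 4.95, 5.01, 5.06, 4.99]

    avg = mean(measurements)
    bias = avg - reference    # accuracy: distance of the mean from the target
    sd = stdev(measurements)  # precision: dispersion of the repeated measures

    print(f"mean = {avg:.3f}  bias = {bias:+.3f}  sd = {sd:.3f}")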

HOW TO INTERPRET THE CONTROL DATA?

Random error appears due to the variability inherent in the analysis tools. It is inevitable, and the goal is to reduce it as much as possible. Possible sources of random error are the lack of homogeneity of the sample, variability in dispensing, oscillations of the light source… Because of its unpredictable nature, the only way to reduce it (it cannot be eliminated) is to monitor potential sources of error and act on them. For the same reason, the way to detect it is by means of successive measurements of the same sample (that is, a control sample): a high random error shows up as a lack of precision (i.e., dispersion above the expected values).
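As a rough sketch of this check (Python, with assumed values, including a hypothetical expected standard deviation taken from the control insert), a control series can be screened for excessive dispersion; the 1.5x threshold here is an illustrative rule of thumb, not a formal statistical test.

    from statistics import stdev

    # Assumed figures: the control insert declares an expected standard
    # deviation of 0.05 g/L; the series below is invented for illustration.
    expected_sd = 0.05
    series = [5.04, 4.91, 5.12, 5.09, 4.88, 5.15, 5.03, 4.90]

    observed_sd = stdev(series)
    # Simple screening rule: warn when the observed dispersion clearly
    # exceeds the dispersion the control material claims.
    if observed_sd > 1.5 * expected_sd:
        print(f"high random error: sd = {observed_sd:.3f} > {1.5 * expected_sd:.3f}")
    else:
        print(f"dispersion acceptable: sd = {observed_sd:.3f}")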

Systematic error appears when the values obtained deviate in a consistent way from the true value of the sample. In this case, we must look for the cause of the deviation in the measurement procedure. It can be either absolute (for example, an interference that introduces a constant bias at all concentrations) or proportional (the deviation increases with the analyte concentration). Unlike the previous case, systematic error can (and should) be eliminated. A common cause of systematic error is loss of the calibration status due to the natural deterioration of the reagents over time. In that case, we observe a progressive deviation over time of the value obtained for the control with respect to the expected value, up to the point where that deviation becomes unacceptable. Another common case of systematic error occurs when the calibrator, controls and samples behave significantly differently towards the reagents (matrix effect). These differences are corrected by assigning the calibrator a value specific to the reagent to compensate for the effect, which means that mixing calibrators and controls with reagents from different suppliers is, in general, a bad practice, as that compensation is no longer accounted for.
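A small sketch of the distinction between absolute and proportional systematic error (Python, with invented control levels and readings): regressing the measured values against the reference concentrations, an intercept far from 0 points to a constant bias, while a slope far from 1 points to a proportional one.

    # Assumed data: five controls at known concentrations and their readings.
    reference = [1.0, 2.0, 4.0, 8.0, 16.0]
    measured = [1.15, 2.12, 4.35, 8.60, 17.10]

    n = len(reference)
    mx = sum(reference) / n
    my = sum(measured) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(reference, measured)) \
            / sum((x - mx) ** 2 for x in reference)
    intercept = my - slope * mx

    print(f"intercept = {intercept:+.3f}  (constant bias if far from 0)")
    print(f"slope     = {slope:.3f}  (proportional bias if far from 1)")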

It is important to note that in both cases it is not a particular measured value of the control that triggers the alarms, since that single measure is itself subject to random error, but the analysis of successive measures over time. Most systems include graphic tools that greatly facilitate the interpretation of control series, even without additional statistical or metrological knowledge. Some of the most widely used are:

Levey-Jennings charts: They plot the successive measures as their deviation from the expected value, with limits marked at 1, 2 and 3 times the standard deviation of the measure. It is very easy to detect when these deviations cease to be random, show trends, or are simply much larger than is statistically acceptable.

[Figure: Levey-Jennings chart]
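A minimal Levey-Jennings sketch in Python with matplotlib (the target, standard deviation and control values are all assumed):

    import matplotlib.pyplot as plt

    # Assumed target value, standard deviation and control series.
    target, sd = 5.00, 0.05
    values = [5.02, 4.96, 5.05, 5.07, 5.01, 4.93, 5.08, 5.11, 5.06, 5.13]
    runs = range(1, len(values) + 1)

    fig, ax = plt.subplots()
    ax.plot(runs, values, marker="o")
    ax.axhline(target, color="black")  # expected value
    # Control limits at 1, 2 and 3 standard deviations around the target.
    for k, style in zip((1, 2, 3), ("dotted", "dashdot", "dashed")):
        ax.axhline(target + k * sd, color="grey", linestyle=style)
        ax.axhline(target - k * sd, color="grey", linestyle=style)
    ax.set_xlabel("Control run")
    ax.set_ylabel("Measured value")
    ax.set_title("Levey-Jennings chart")
    plt.show()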

CUSUM charts: They plot each successive measure against the cumulative sum of the deviations from the expected value obtained so far. In an ideally random series this sum stays close to zero (negative and positive deviations compensate each other), while in a series with a systematic positive or negative bias the sum grows steadily in magnitude.

[Figure: CUSUM chart]
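And a corresponding CUSUM sketch (Python, reusing the same assumed series): the running sum of deviations hovers near zero while the process is unbiased and climbs steadily once a positive bias sets in.

    from itertools import accumulate

    # Assumed target and control series, as in the previous sketch.
    target = 5.00
    values = [5.02, 4.96, 5.05, 5.07, 5.01, 4.93, 5.08, 5.11, 5.06, 5.13]

    # CUSUM: cumulative sum of the deviations from the expected value.
    deviations = [v - target for v in values]
    cusum = list(accumulate(deviations))

    for run, c in enumerate(cusum, start=1):
        print(f"run {run:2d}: cusum = {c:+.3f}")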

The proper use of controls is the best guarantee of the accurate and precise results needed to support the entire winemaking process.

For more than 10 years, Sinatech’s commitment to the winemaker has been to work side by side to provide the most appropriate analytical solutions for the control and monitoring of the winemaking process: automated methods easily adaptable to any work routine, with a personalized advisory team to help you implement them quickly and smoothly.

Sinatech: TeamWork.