The calibration of water quality testing equipment is as important as the tuning of a musical instrument.
When people find out I’m a chemist, the first thing they ask me is, “Are you Walter White?” (This shows you the kind of people I typically run into.) The second thing they ask me is, “Why is calibration so important?”
Simply stated, calibration is how we adjust water monitoring equipment that has started to drift a little. A drift in instrument response over time is inevitable. Small physical changes to the glass surface on a pH sensor, for example, will change the response and result in an inaccurate pH reading. A calibration will offset those changes in your water quality testing equipment.
Calibration is important; we all get that. Effective calibration is vital for getting accurate data. What I mean by effective calibration is this: an instrument should be calibrated with standards that bracket the expected results as closely as possible. For instance, suppose the water I am analyzing has a pH of 5.59. I will get better results if I calibrate a pH sensor using pH 4 and pH 7 standards than if I calibrate the same sensor using pH 2 and pH 10 standards. I would get even better results if I chose pH 5 and pH 6 standards. Of course, in order to accurately bracket expected results, I have to know something about the water beforehand. Initially selecting a broad range of calibration standards can be useful for determining the “ballpark” values of the water. I could then re-calibrate the instrument with a pair of standards that would be a tighter fit. If the water is expected to have a range of values throughout a sampling period, I would adjust the calibration standards accordingly.
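For readers who like to see the arithmetic, here is a minimal sketch of that bracketing effect. The sensor model and every number in it are invented for illustration (a hypothetical electrode with a small non-linearity bolted onto an ideal Nernst slope); the point is only that fitting a line through two closer standards recovers a pH of 5.59 more accurately.

```python
# A toy pH electrode: ideal Nernst slope (-59.16 mV per pH unit) plus a
# small made-up non-linearity. Real sensor behavior will differ.
def sensor_mv(ph):
    return -59.16 * (ph - 7.0) + 0.8 * (ph - 7.0) ** 2

def two_point_calibration(ph_lo, ph_hi):
    """Fit a straight line through the readings at two pH standards and
    return a function that converts millivolts back to pH."""
    mv_lo, mv_hi = sensor_mv(ph_lo), sensor_mv(ph_hi)
    slope = (ph_hi - ph_lo) / (mv_hi - mv_lo)
    return lambda mv: ph_lo + slope * (mv - mv_lo)

true_ph = 5.59
reading = sensor_mv(true_ph)          # what the electrode reports

wide = two_point_calibration(2.0, 10.0)(reading)     # pH 2 and 10 standards
standard = two_point_calibration(4.0, 7.0)(reading)  # pH 4 and 7 standards
tight = two_point_calibration(5.0, 6.0)(reading)     # pH 5 and 6 standards

for label, value in [("pH 2/10", wide), ("pH 4/7 ", standard), ("pH 5/6 ", tight)]:
    print(f"{label}: reads pH {value:.3f} (error {abs(value - true_ph):.3f})")
```

With these hypothetical numbers, the pH 5/6 pair lands within a few thousandths of a pH unit, the pH 4/7 pair within a few hundredths, and the pH 2/10 pair is off by roughly two tenths.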
When calibrating water quality testing equipment with a single calibration standard, it is imperative to use a standard close in value to that of the water. The farther the standard is from that value, the lower the quality of the data the instrument will provide. How inaccurate the results become depends on a characteristic called linearity. When scientists talk about linearity, they are referring to an instrument’s response as the concentration changes. Linearity is rarely absolute; rather, it is a matter of degree. An instrument can have a high degree of linearity for an analyte, say, chloride, over a particular range. For example, suppose I have an instrument that behaves very linearly between 0 and 250 milligrams of chloride per liter (mg/L), and the water I am sampling just happens to have a concentration around 150 mg/L (these are the joys of living in a hypothetical world). If I calibrate the instrument using 100 mg/L and 200 mg/L standards, I can feel quite confident in the results.
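The single-standard case can be sketched the same way. Everything here is hypothetical — a made-up chloride instrument whose response has drifted slightly and picked up a small non-linearity — but it shows why the lone standard should sit near the expected value: an offset measured far away corrects for the wrong amount of error.

```python
# Hypothetical drifted instrument: reads ~2% high plus a small
# concentration-dependent (non-linear) error. Units are mg/L chloride.
def instrument_reading(true_mg_l):
    return 1.02 * true_mg_l + 0.0001 * true_mg_l ** 2

def one_point_calibration(standard_mg_l):
    """Offset-only correction: measure one standard, then subtract the
    error observed there from every subsequent reading."""
    offset = instrument_reading(standard_mg_l) - standard_mg_l
    return lambda raw: raw - offset

true_value = 150.0                       # what is actually in the water
raw = instrument_reading(true_value)     # what the drifted instrument reports

near = one_point_calibration(140.0)(raw)  # standard close to the sample
far = one_point_calibration(25.0)(raw)    # standard far from the sample

print(f"standard at 140 mg/L -> {near:.1f} mg/L (error {abs(near - true_value):.1f})")
print(f"standard at  25 mg/L -> {far:.1f} mg/L (error {abs(far - true_value):.1f})")
```

With these invented numbers, the nearby standard leaves an error of about half a mg/L, while the distant one leaves nearly 5 mg/L.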
But not all water monitoring equipment and analytes behave linearly, and no instrument is linear over every range. Usually there are slight deviations from linearity – and these are the problem areas that make effective calibration so important. If an instrument doesn’t exhibit linear behavior over a range, selecting two calibration standards that closely bracket the expected value will usually negate the inaccuracies created by non-linearity.
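To put numbers on that claim, here is a sketch using an invented, strongly non-linear (saturating) response curve. A two-point calibration stretched across the full 0-2000 mg/L range misses badly, while the same calibration math with standards bracketing the expected ~150 mg/L lands close.

```python
import math

# Invented saturating (non-linear) instrument response, for illustration only.
def response(mg_l):
    return 1000.0 * (1.0 - math.exp(-mg_l / 800.0))

def two_point_calibration(std_lo, std_hi):
    """Fit a straight line through the readings at two standards and
    return a function mapping raw response back to concentration."""
    r_lo, r_hi = response(std_lo), response(std_hi)
    slope = (std_hi - std_lo) / (r_hi - r_lo)
    return lambda raw: std_lo + slope * (raw - r_lo)

true_conc = 150.0
raw = response(true_conc)

wide = two_point_calibration(0.0, 2000.0)(raw)    # brackets the whole range
tight = two_point_calibration(100.0, 200.0)(raw)  # brackets the expected value

print(f"wide bracket (0/2000 mg/L):   {wide:.0f} mg/L")
print(f"tight bracket (100/200 mg/L): {tight:.0f} mg/L")
```

With this made-up curve, the wide calibration reads well over twice the true value, while the tightly bracketed one comes back within a couple of mg/L.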
Perhaps a graphical demonstration is called for. Figure 1 shows an instrument with a high degree of linearity from 0-250 mg/L, and Figure 2 represents an instrument with a non-linear response across a range of 0-2000 mg/L. Figure 3 shows the same instrument as Figure 2, but over a much smaller range (500-550 mg/L) where its response is much more linear.
In summary, the concentrations and number of standards you use to calibrate a piece of equipment will have an impact on your results. Here at In-Situ, we not only make the most accurate water level and water quality instrumentation on the market, but we also use multi-point calibration procedures across a wide temperature range to ensure accuracy in all conditions. Whether you buy or rent our equipment, you can take comfort in knowing that the data you generate will be top quality.