This paper provides an overview of how the choice of statistical methods can significantly affect a facility's ground water monitoring costs. For example, a recent study analyzed the long-term ground-water monitoring cost impacts of different statistical analysis approaches at over 20 landfills. The study found that the choice of statistical approach can make more than a 50% difference in long-term monitoring costs. It identified four key issues in choosing a statistical approach that minimizes monitoring costs:
- Minimizing retesting because of inappropriate hydrogeologic assumptions;
- Minimizing site-wide false positive rates;
- Minimizing sample size requirements; and
- Maximizing statistical flexibility when data characteristics change.
Usually, the costs of performing statistical analyses range from 10% to 15%, and should rarely exceed 20%, of the total ground water monitoring costs. Field sampling, analytic laboratory and regulatory reporting costs comprise most of the monitoring costs. For example, analytic laboratory costs for the Subtitle D Appendix I constituents required under detection monitoring usually run between $350 and $400 per well per sampling event. When sampling and reporting costs are also included, the per-well ground water monitoring costs often climb to $700 or more. If a facility is forced into retesting because an inappropriate statistical test was used, the sampling, laboratory and reporting costs will often exceed $2,000. And if the facility is inappropriately forced into assessment monitoring, the analytic laboratory costs alone can easily exceed $1,900 per well.
By contrast, with the prudent use of appropriate statistics, the statistical analysis cost for a detection monitoring program should run from $125 to $175 per monitoring well per reporting period and may prevent a facility from being inappropriately forced into retesting or assessment monitoring. Usually, when statistical costs exceed 20% of the total ground water monitoring costs, the cost advantages of specialized ground-water statistical software are being overlooked. For example, EMCON, IT Corporation and Law Environmental, along with a number of other national and regional solid waste consulting companies, use the Sanitas ground-water statistical analysis software at many sites to automate the statistical analyses and ensure that the most appropriate statistical tests are being used. This decision support system, developed by Intelligent Decision Technologies in Boulder, Colorado, is one example of how artificial intelligence software is being used to reduce solid waste management operating costs.
Ironically, at sites without large analytic programs, statistical costs below 10% of the total ground water monitoring costs are often an indication that the inappropriate use of statistics has forced the facility into extensive retesting or assessment monitoring, driving the sampling and analytic costs through the roof. There is no single statistical approach that minimizes ground-water monitoring costs for all sites. There are, however, general considerations that apply to most sites. These considerations are summarized in four steps that you can take to gain better control over your monitoring costs.
Appropriate Hydrogeologic Assumptions
The first step in controlling your ground water monitoring costs is to ensure that the hydrogeologic assumptions of your statistics accurately reflect your site's hydrogeology. Many site managers are surprised to find that different statistical tests implicitly make different assumptions about the site hydrogeology and monitoring program. Misunderstanding these implicit assumptions is the greatest cause of skyrocketing ground water monitoring costs.
For example, we have seen sites where interwell statistics indicate a release from the facility when no waste has yet been placed. Such cases, along with formal studies, have demonstrated that an intrawell statistical approach is generally more appropriate than an interwell approach when there is evidence of spatial variation in the site's hydrogeology. However, demonstrating to regulators the need for and effectiveness of an intrawell approach can be difficult, especially for sites where the monitoring program began after waste was placed.
Intrawell statistics compare historical data at the compliance well against recent observations from that well. This eliminates the possibility that spatial variation between upgradient and downgradient wells will cause an erroneous conclusion that a release has occurred, but it assumes that the historical data at the compliance wells have not been impacted by the facility. The fundamental regulatory concern about the intrawell approach is whether the historical data have been impacted. If so, the historical data do not provide an accurate baseline against which to detect a future impact. This is a common problem faced by older facilities whose monitoring wells were installed after waste had been placed. How do they demonstrate that their historical data are 'clean'?
Generally, the facility should first use hydrogeologic information supplemented with statistical evidence to demonstrate that there is significant natural spatial variation in the site's hydrogeology. One statistical approach is to evaluate whether there are statistically significant differences among the upgradient wells. If there are, this is usually evidence of significant spatial variation at the site, and it can reasonably be concluded that an interwell approach will lead to erroneous conclusions about the facility's water quality.
The facility can then screen the historical data at the compliance wells to ensure that only clearly unimpacted data are used to develop each compliance well's background standard. Statistical approaches used to assist in the screening include VOC tests, trend tests and even interwell limit-based analyses. This approach has worked for many facilities. It has received regulatory acceptance in a number of states, including California and Colorado, and has allowed facilities to significantly reduce their ground water monitoring costs by reducing retesting and keeping facilities out of unnecessary assessment monitoring.
Minimizing Site-Wide False Positives
The second step in controlling your ground water monitoring costs is to minimize your site-wide false positives. False positive rates in the original EPA guidance and in most state and Federal regulations are considered only on a per-test or individual comparison basis. Facilities, however, need to focus on the site-wide false positive rate, which is the probability of finding at least one statistical false positive result in a regulatory reporting period.
Site-wide false positive rates are far higher than the individual test false positive rates, and they increase with the number of statistical tests performed. For example, if an interwell statistical test is run on only 10 constituents at a 5% false positive rate per constituent, the site-wide false positive rate will be approximately 40%. Consequently, many facilities have at least a 50% chance of one or more false positives in each reporting period. The site-wide false positive rate is critical because it takes only one finding of a statistically significant difference to move a facility into retesting and/or assessment monitoring.
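The arithmetic behind that 40% figure is simple. Assuming the tests are independent, the site-wide rate compounds as follows:

```python
def site_wide_rate(alpha_per_test, n_tests):
    """Probability of at least one false positive across
    n_tests independent statistical tests."""
    return 1 - (1 - alpha_per_test) ** n_tests

# 10 constituents tested at a 5% false positive rate each
print(round(site_wide_rate(0.05, 10), 2))  # prints 0.4
```

With several wells and a long constituent list, the number of comparisons, and hence the site-wide rate, grows very quickly.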
One approach to minimizing the site-wide false positive rate is to reduce the number of constituents that are statistically analyzed under detection monitoring. Federal Subtitle D regulations do not require that every Appendix I constituent be statistically analyzed. In fact, the EPA regulators who promulgated the EPA guidance recognize that in detection monitoring it may be preferable to statistically analyze a subset of the inorganic and organic Appendix I constituents. The choice of the subset should be based upon prior monitoring results, local hydrogeology and leachate characteristics. Some state regulatory agencies such as the regional water quality boards in California have specified shortened lists of inorganic parameters to be statistically analyzed.
A second approach to reducing site-wide false positive rates for VOC analyses is to use composite analyses. Analyses such as Poisson-based limits or the California screening method reduce site-wide false positive rates and are usually far more appropriate for VOCs because of the high proportion of non-detects commonly found in VOC data. Care must be taken, however, in the application of Poisson-based limits. A recent EPA review criticized a commonly used formulation of the Poisson limit. This is just one example of how the application of statistics to ground water quality data continues to change rapidly as more is learned about the ramifications of using the various statistical tests.
A third approach to reducing site-wide false positive rates is to reduce the false positive rate of the individual tests. Reducing the false positive rate will, however, increase the false negative rate. In such situations, increasing the background sample size can offset the potential increase in the false negative rate. An equation for computing a reduced false positive rate has been developed by California regulators and approved by EPA. Alternatively, power analyses can be performed to justify the use of a reduced false positive rate.
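The specific equation developed by the California regulators is not reproduced here, but a standard adjustment of this general kind, a Šidák-type correction shown below purely as a sketch, solves for the per-test rate that achieves a chosen site-wide rate across independent tests:

```python
def per_test_alpha(site_wide_alpha, n_tests):
    """Sidak-style per-test false positive rate that yields the
    target site-wide rate across n_tests independent tests."""
    return 1 - (1 - site_wide_alpha) ** (1 / n_tests)

# Target a 5% site-wide rate across 40 well/constituent comparisons
alpha = per_test_alpha(0.05, 40)
```

Note how small the per-test rate becomes; this is precisely the trade-off discussed above, since driving the individual false positive rate down raises the false negative rate unless background sample sizes grow to compensate.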
Maximizing Statistical Power For A Given Sample Size
The third step in controlling your ground water monitoring costs is to maximize statistical power for a given sample size. The power of a statistical test is its ability to detect a 'true' difference or change. A number of factors can affect the power of a test, and unfortunately, not all statistical tests have the same power under the same circumstances. A key determinant of power is the statistical sample size (e.g., the number of analytic results). For example, parametric tests usually have more power than nonparametric tests for the same sample size. Thus, when the data are normally distributed, a parametric test is often the test of choice for ground water monitoring, especially when sample sizes are limited.
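The sample-size effect can be illustrated with a simple power calculation for a one-sided z-test, used here as an idealized stand-in for the parametric tests discussed above; the one-standard-deviation shift is an arbitrary example, not a regulatory criterion.

```python
from statistics import NormalDist

def z_test_power(effect_size, n, alpha=0.05):
    """Power of a one-sided z-test to detect a shift of
    effect_size standard deviations with n samples."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)
    return 1 - nd.cdf(z_crit - effect_size * n ** 0.5)

# Power grows with sample size for a fixed one-sigma shift
for n in (4, 8, 16):
    print(n, round(z_test_power(1.0, n), 2))
```

For this example the power climbs from roughly 0.64 at four samples to roughly 0.99 at sixteen, which is why sample size and test choice together drive how reliably a real change is detected.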
The difficulty in using a parametric test is that ground water quality data often do not fit a normal or log-transformed normal distribution when rigorous normality tests, such as the Shapiro-Wilk or Shapiro-Francia tests, are applied. The usual response is either to proceed with a parametric analysis anyway, which will yield unpredictable results, or to move to a nonparametric analysis. The disadvantage of the parametric analysis in such circumstances is that it is impossible to accurately control the power or false positive rate of the test. The disadvantage of the nonparametric analysis is that it has much lower power for a given sample size than the parametric test when the data are normally or transformed-normally distributed.
There is one other option that can be employed when the data do not fit a normal or log transformed normal distribution. This option is to utilize a family of transforms identified by Dennis Helsel, one of the US Geological Survey's water quality statistical analysis experts. These transforms, called 'The Ladder Of Powers', significantly increase the possibility that the data can be transformed into a normal distribution and that a parametric analysis can be used. This increases the power of the test for a given false positive rate and sample size. Thus additional sampling, expensive retests, and/or being unnecessarily forced into assessment monitoring can be avoided. Unfortunately, the computations required to perform and evaluate the effectiveness of these transforms are quite extensive and thus using specialized statistical software is often a necessity.
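A sketch of how such a search might be automated is shown below. The candidate transforms follow the general shape of Helsel's ladder, but the data and the acceptance threshold are illustrative, and a production implementation would also handle non-detects, ties and negative values.

```python
import math
from scipy.stats import shapiro

# Candidate "ladder of powers" transforms (positive data assumed)
LADDER = {
    "x^2":     lambda x: x ** 2,
    "x":       lambda x: x,
    "sqrt(x)": math.sqrt,
    "log(x)":  math.log,
    "-1/sqrt": lambda x: -1 / math.sqrt(x),
    "-1/x":    lambda x: -1 / x,
}

def best_transform(data, alpha=0.05):
    """Return (name, W) for the ladder transform with the highest
    Shapiro-Wilk W statistic among those passing the normality test,
    or None if no transform achieves approximate normality."""
    best = None
    for name, f in LADDER.items():
        w, p = shapiro([f(v) for v in data])
        if p > alpha and (best is None or w > best[1]):
            best = (name, w)
    return best

# Hypothetical right-skewed concentration data
data = [0.30, 0.45, 0.61, 0.82, 1.00, 1.22, 1.65, 2.23, 3.32]
result = best_transform(data)
```

If a passing transform is found, the parametric test is run on the transformed data, preserving power; if none is found, the analysis falls back to a nonparametric approach.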
Maintaining Statistical Flexibility As Data Change

The fourth step in controlling your ground water monitoring costs is to develop a site-specific analysis methodology that incorporates the spectrum of possible changes in data characteristics over time. For example, data distributions, percentages of non-detects, or equality of variances can and often do change dramatically as ground water monitoring programs mature. All too often, facilities do not plan for these possible changes. Instead of proposing in their permit applications and monitoring plans a decision logic for choosing the most appropriate statistical approach based upon the current characteristics of the data, they propose one statistical test based solely on the limited data available at the time the application was submitted. When data characteristics do change and the facility does not adjust its statistical approach, retesting and assessment monitoring, with all their associated increased monitoring costs, are highly probable.
Unfortunately, adjusting the statistical approach once the permit has been issued can be costly and may be used as a mechanism by other parties to raise a host of other unrelated issues. The approach we use is to incorporate into permit applications and monitoring plans a decision logic for choosing the most appropriate statistical approach based upon the current characteristics of the data. On a regular basis, the data characteristics are reviewed and the most appropriate statistical test is selected based upon the permit decision logic. While the test may change, so long as the decision logic remains consistent, no permit modification is required. This is a concept that has been accepted by US EPA and numerous state regulatory agencies but again, to cost-effectively implement this type of flexibility, specialized software is often a necessity.
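Such a decision logic can be as simple as a few data-driven branches. The sketch below is purely illustrative: the non-detect thresholds and test names are examples, not the decision criteria of any particular permit or agency.

```python
def choose_test(results, detect_flags):
    """Illustrative decision logic: select a statistical approach
    from the current data characteristics (thresholds are examples)."""
    n = len(results)
    nd_fraction = detect_flags.count(False) / n
    if nd_fraction > 0.5:
        return "Poisson/limit-based analysis"     # mostly non-detects
    if nd_fraction > 0.15:
        return "nonparametric prediction limit"   # moderate non-detects
    return "parametric prediction limit"          # mostly detects

# 3 of 10 results are non-detects (30%)
flags = [True] * 7 + [False] * 3
chosen = choose_test(list(range(10)), flags)
```

Because the logic, rather than any single test, is what the permit specifies, the most appropriate test can be re-selected each reporting period without a permit modification.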
Statistical issues are driving both short- and long-term monitoring costs at municipal landfills around the nation. With specialized ground-water statistical analysis software, the costs of performing the statistical analyses should rarely exceed 20% of the total ground water monitoring costs for ongoing monitoring programs. The cost of initial statistical evaluation and permit preparation, however, may often exceed this amount. When knowledgeably applied, statistics can reduce the number of samples required, minimize retesting, and prevent a facility from being unnecessarily forced into assessment monitoring, yet still provide a reliable indication of a release. Unfortunately, when statistical tests are inappropriately applied, the statistical findings can result in grossly inflated monitoring costs and yet still provide inaccurate answers.