The Triad cost-effectively controls decision uncertainty by targeting the principal components of data uncertainty, including the sampling, analytical, and relational uncertainties produced by data collection efforts.
At each step of the characterization and remediation process, important decisions must be made. Primary among these is determining whether a site, or portions of the site, has contamination present at unacceptable levels. Borrowing language from CERCLA, the answer to this question during a site assessment determines whether a remedial investigation is necessary. The answer to this question during a remedial investigation determines whether remediation is required. The answer to this question post-remediation determines whether site closure has been achieved.
Uncertainty is a given with environmental decisions. Uncertainty arises from incomplete or ambiguous information about the current or potential future state of a site. Decision uncertainty leads to potential decision errors. For the primary decision (i.e., does an area have contamination that poses unacceptable human or ecological risks?), mistakes can be made in two ways. The conclusion may be that an area does not pose an unacceptable risk when in fact it does (a false negative). Alternatively, the conclusion may be that a site poses a concern and so requires further action when in fact it does not (a false positive). The first error can result in missed contamination. The second error can result in the waste of public and/or private resources.
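The two error types can be illustrated with a minimal simulation. In this sketch, a single noisy measurement is compared against an action level to decide whether an area exceeds it; the action level, noise level, and true concentrations are all hypothetical values chosen for illustration, not drawn from any regulatory standard.

```python
import random

random.seed(0)

ACTION_LEVEL = 100.0  # hypothetical action level (mg/kg)
NOISE_SD = 30.0       # hypothetical combined measurement noise (mg/kg)

def error_rate(true_conc, n_trials=10_000):
    """Fraction of trials in which one noisy measurement leads to the
    wrong conclusion about whether the area exceeds the action level."""
    truly_exceeds = true_conc > ACTION_LEVEL
    errors = 0
    for _ in range(n_trials):
        measured = random.gauss(true_conc, NOISE_SD)
        if (measured > ACTION_LEVEL) != truly_exceeds:
            errors += 1
    return errors / n_trials

# A truly contaminated area (120 mg/kg) can be declared clean (false negative)...
fn_rate = error_rate(120.0)
# ...and a truly clean area (80 mg/kg) can be flagged for action (false positive).
fp_rate = error_rate(80.0)
```

With these illustrative numbers, both error rates land near 25%, showing how measurement noise alone can produce either kind of decision error at appreciable frequency.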
Hazardous waste site decisions are primarily based on the results of environmental measurements. These measurements have traditionally taken the form of laboratory analyses of discrete samples from site media. Advances in measurement systems and analytics have increased the number of options for obtaining analytical data. Systems are now available that can measure the presence of contamination in situ (e.g., X-Ray Fluorescence (XRF) for certain metals), or that can provide rapid analysis of media samples on-site (e.g., immunoassay technologies). In addition, off-site laboratories often provide the option for expedited sample turn-around.
The process of making environmental measurements injects data uncertainty into decision-making in three ways. The first is through analytical uncertainty. Analytical uncertainty can come from several sources. One is the variability in measurement results observed from repeated measurements of the same sample. Another is uncertainty caused by inadequate detection or quantitation capabilities relative to the intended use of the data. A third is analytical uncertainty introduced by unrecognized interferences that bias results. Misinterpreting results from non-specific methods might be considered yet another source of uncertainty in analytical data. Analytical uncertainty is reduced through the use of appropriate quality control techniques, by selecting and modifying methods so they are better suited to the intended data use, and by drawing on appropriate technical expertise.
The second major category of data uncertainty is sampling uncertainty. As with analytical uncertainty, sampling uncertainty can stem from several specific sources, but the root cause is contaminant heterogeneity within environmental matrices. When contaminants are released to the environment and migrate, heterogeneity is created on large and small spatial scales. Contaminated matrix populations (i.e., portions of matrix that were contaminated through similar mechanisms and so tend to have more similar contaminant profiles) may be intertwined or interspersed with non-contaminated media. Even differences in particle size within the same bulk matrix may contribute to contaminant heterogeneity. When relatively few samples are used to characterize a heterogeneous matrix, there is little confidence that contaminant populations are understood well enough to support decision-making. Sampling uncertainty manifests itself when the data user does not know whether the results from 1-gram samples analyzed in the laboratory can be legitimately extrapolated to represent the contaminant concentration for the area from which the samples came. Sampling uncertainty is reduced by collecting more samples from the area of interest, and by concentrating samples in those areas of greatest decision uncertainty.
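The effect of heterogeneity on sampling uncertainty can be sketched with a small simulation. The matrix below is a hypothetical two-population mixture (a contaminated population interspersed with near-clean media, with illustrative concentrations); repeated sampling programs show how unstable the estimated area mean is when sample numbers are low.

```python
import random
import statistics

random.seed(1)

def draw_sample():
    """One discrete sample from a hypothetical heterogeneous matrix:
    20% of the matrix is a contaminated population (~500 mg/kg),
    80% is near-clean (~10 mg/kg)."""
    if random.random() < 0.2:
        return random.gauss(500.0, 50.0)
    return random.gauss(10.0, 3.0)

def spread_of_mean(n_samples, n_programs=2_000):
    """Standard deviation of the estimated area mean across many
    repeated hypothetical sampling programs of n_samples each."""
    means = [statistics.mean(draw_sample() for _ in range(n_samples))
             for _ in range(n_programs)]
    return statistics.stdev(means)

few = spread_of_mean(4)    # sparse program: highly variable estimates
many = spread_of_mean(64)  # denser program: far more stable estimates
```

Under these assumptions the 64-sample programs estimate the area mean with roughly a quarter of the spread of the 4-sample programs, which is the practical meaning of reducing sampling uncertainty by increasing sample numbers.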
The last contributor to data uncertainty is relational uncertainty. Relational uncertainty refers to the uncertainty associated with the relationship between the measurement result for a parameter and the true parameter of interest from a decision-making perspective. Relational uncertainty is usually not a significant issue for traditional laboratory techniques, but becomes important for many of the methods that might be used at a site to produce real-time information. This is because these techniques may measure a parameter (e.g., the presence of a class of contaminants) that is only loosely related to the primary parameter of concern (e.g., the concentration of a particular contaminant). As with analytical uncertainty, relational uncertainty can be controlled by proper quality control procedures, or the use of an alternative analytical technique with intrinsically lower relational uncertainty.
For traditional sampling program design, data quality has been synonymous with controlling analytical uncertainty. Analytical uncertainty has been managed through the use of standardized fixed-laboratory procedures, with quality assurance/quality control specified through some form of standardized, auditable program such as the Contract Laboratory Program (CLP). While this approach may produce results of known and verifiable analytical quality, experience has proven that the primary contributor to decision uncertainty for most sites has been sampling uncertainty, not analytical uncertainty. Sampling programs that reduce sampling uncertainty to levels comparable to analytical uncertainty using traditional fixed laboratory techniques are usually prohibitively expensive. One reason for this is that sampling uncertainty in general is inversely proportional to the square root of sample numbers. Consequently, to reduce sampling uncertainty by a factor of ten requires a 100-fold increase in sample numbers.
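The square-root relationship above can be made concrete with a short worked computation. Treating sampling uncertainty as the standard error of the mean (sigma divided by the square root of n), with an illustrative between-sample standard deviation:

```python
import math

sigma = 50.0      # illustrative between-sample standard deviation (mg/kg)
n_initial = 9     # illustrative starting sample count

# Sampling uncertainty modeled as the standard error of the mean:
se_initial = sigma / math.sqrt(n_initial)

# To shrink the standard error tenfold, n must grow by 10**2 = 100x.
n_needed = n_initial * 10**2
se_target = sigma / math.sqrt(n_needed)

ratio = se_initial / se_target  # → 10.0
```

Nine samples become nine hundred, which is why closing the gap with fixed-laboratory analyses alone is usually cost-prohibitive.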
The Triad approach recognizes that there are a variety of measurement technologies that can produce results useful for decision-making purposes. These technologies range from qualitative, to semi-quantitative, to quantitative methods. While many of these techniques may produce data with higher relational or analytical uncertainties than traditional fixed-laboratory techniques, they do so at greatly reduced per-sample costs, allowing many more samples to be collected for the same investment. The higher sample numbers permit better management of sampling uncertainties, better understanding of contaminant populations, and more accurate conceptual site models (CSMs). From a Triad perspective, the best combination of data collection technologies is the one that manages sampling, analytical, and relational uncertainties to produce effective data (i.e., data which, taken together, allow a decision to be made confidently) at the least cost. When a combination of methods is used to manage various types of uncertainty in the CSM to support site decision-making, the data sets produced are also referred to as collaborative data sets.
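The cost tradeoff can be sketched numerically. In this hypothetical comparison, a fixed budget buys either many cheap, noisier field measurements or a few precise fixed-laboratory analyses; total uncertainty in the estimated area mean combines sampling and analytical contributions. All costs and standard deviations below are illustrative assumptions, not representative figures for any real method.

```python
import math

BUDGET = 10_000.0    # hypothetical program budget (USD)
SAMPLING_SD = 100.0  # hypothetical between-location heterogeneity (mg/kg)

def total_uncertainty(cost_per_sample, analytical_sd):
    """Approximate uncertainty in the estimated area mean when the
    entire budget is spent on samples of a single method type."""
    n = int(BUDGET // cost_per_sample)
    sampling_se = SAMPLING_SD / math.sqrt(n)
    # Analytical noise also averages down across n measurements.
    analytical_se = analytical_sd / math.sqrt(n)
    return math.hypot(sampling_se, analytical_se)

# Hypothetical field screening method: cheap but analytically noisy.
cheap = total_uncertainty(cost_per_sample=50.0, analytical_sd=40.0)
# Hypothetical fixed-lab method: precise but expensive per sample.
precise = total_uncertainty(cost_per_sample=1_000.0, analytical_sd=5.0)
```

Under these assumptions the cheaper, noisier method yields markedly lower total decision uncertainty, because the budget buys enough samples to drive down the dominant sampling term; this is the quantitative intuition behind effective, collaborative data sets.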
A second problem with traditional sampling programs is that they have relied on sampling and analysis plans that completely pre-specify sample numbers, locations, and the analytics to be used. Sampling programs are required because there is uncertainty about the contamination status of a site. This uncertainty, in turn, makes it impossible to know a priori exactly how many samples are required to provide data sets sufficient to achieve data quality goals, or where best to locate those samples. Invariably, traditional hazardous waste site sampling programs yielded surprises. Contamination was encountered where it was not expected. Contamination extended beyond what was assumed. Potential contaminants of concern were identified that were previously not known to exist at a site. In these circumstances, decision-makers are left with the difficult choice of either making decisions with undesirable levels of uncertainty, or mounting yet another sampling program to further resolve those uncertainties.
From a Triad perspective, this is where real-time measurement technologies become important. Real-time measurement technologies yield data quickly enough to influence the progress of data collection. This may mean data that truly are available instantaneously, or it may mean 24- or 48-hour turn-around times for sample analyses. Whatever the case, the use of real-time measurement technologies allows unexpected results, and their implications, to be resolved in the context of the on-going field activities. For example, additional samples could be allocated to bound the vertical or lateral extent of contamination unexpectedly encountered. Biased sampling could be conducted to clarify anomalies identified during data collection work (e.g., stained soils, evidence of stressed vegetation, discovery of evidence of waste disposal, an unexpectedly high sample result, etc.). Additional QA/QC could be implemented to correct for unexpected data quality problems. The product is a data set that, by the time the data collection program is demobilized, fully satisfies the data quality needs of the decision to be made.
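The bounding example above can be sketched as a simple adaptive loop: real-time results at one location trigger step-out samples until a result below the action level bounds the extent of contamination. The action level, step distances, and field results below are hypothetical stand-ins, and measure() is a placeholder for any real-time measurement method.

```python
ACTION_LEVEL = 100.0  # hypothetical action level (mg/kg)

# Hypothetical real-time results keyed by distance (m) from a hot spot.
FIELD_RESULTS = {0: 450.0, 10: 220.0, 20: 130.0, 30: 60.0, 40: 12.0}

def measure(distance):
    """Stand-in for a real-time field measurement at a given location."""
    return FIELD_RESULTS.get(distance, 0.0)

def bound_contamination(start=0, step=10, max_distance=100):
    """Step outward from a hot spot, adding samples while the crew is
    still in the field, until a clean result bounds the lateral extent."""
    sampled = []
    d = start
    while d <= max_distance:
        result = measure(d)
        sampled.append((d, result))
        if result < ACTION_LEVEL:
            break  # extent bounded; no second mobilization required
        d += step
    return sampled

extent = bound_contamination()
```

Because the decision to add each sample is made in the field, the data set is complete when the crew demobilizes, rather than revealing gaps only after laboratory results return weeks later.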
The potential for making decision errors can never be removed completely. With proper planning, however, the probability of making decision errors can be reduced to levels that are acceptable. The Triad provides an approach for cost-effectively attaining this goal.