Triad Compatibility with State and Federal Guidelines
Generally, the Triad approach is compatible with most existing state and federal regulations and guidelines. A major reason for this compatibility is that the Triad is a second generation approach to environmental decision making that has grown out of the experience of the last several decades during which most significant state and federal cleanup programs have matured. In fact, the Triad can be considered an optimization of the techniques developed in response to those programs. Because most existing environmental remedial programs address potential risk to human health and ecological receptors and are based on determining, assessing, and remediating contaminant sources and migration pathways to receptors, the Triad approach is a valuable unifying concept for effectively and efficiently achieving project goals.
Because regulation supersedes guidance, and because the Triad approach is meant to be used within the existing state and federal regulatory framework for environmental documentation, the documentation component of Triad is equivalent to that for projects using more traditional approaches. The Triad approach was developed using the principles of the scientific method, so, when properly applied, the scientific defensibility of Triad projects should be enhanced compared to traditional approaches. However, scientific defensibility and legal defensibility are not synonymous.
The Interstate Technology & Regulatory Council (ITRC), in its "Technical and Regulatory Guidance for the Triad Approach" (ITRC 2003), notes that it is a misconception that real-time measurement technologies will not withstand legal scrutiny and cites four criteria set forth by the U.S. Supreme Court. These four include whether:
- "the technique has been valid[ated] and tested,
- the principle of the technology has been subjected to peer review and publication,
- the rates of potential error associated with the relevant testing are known, and
- the technique has gained general acceptance in the relevant scientific community."
(Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993)).
The ITRC also notes that the standards for admissibility of scientific evidence in state courts may differ from the federal system.
This section will not attempt to review every state's position on this subject. However, in an examination of admissibility standards for field versus laboratory data as evidence, Simmons (1999) notes that federal or California standards for admissibility of evidence do not distinguish between analyses performed in a laboratory versus those performed in the field. In addition, the standards do not require that USEPA methods be used for analysis.
"In order for data to be accepted as evidence, whether the data come from a fixed laboratory or the field, the technique may need to be generally recognized in the scientific community (California state standard), and must be shown to be relevant and reliable (federal standard). Once evidence has been accepted, the weight which is given to the evidence may depend on a variety of factors, including the training and experience of the personnel, the accuracy of the equipment, and the reliability of the method. The rules for the defensibility of field methods are no different than those for fixed laboratory methods." (Simmons 1999).
Simmons concludes that:
"The rules used by the courts are very different than those established in regulation. In particular, courts have found that evidence may be reliable even if there were major deviations from methods specified in regulation, or if the analysis was done in a non-accredited laboratory, even if accreditation were required by regulation. As to the weight which is put to evidence, the validation of the method and the quality system documentation are certainly relevant." (Simmons 1999).
The implications of these conclusions for projects using the Triad approach are several. First, there is no explicit restriction on the techniques used to collect evidentiary data as long as the technique is judged to provide reliable data. Second, a robust conceptual site model should be developed in order to demonstrate the relevance of the data. Finally, a solid quality assurance and quality control program that documents method performance should be in place and followed for all data collected.
Legal Defensibility Documents
Chain of Custody
By definition, "chain-of-custody" is "an unbroken trail of accountability that ensures the physical security of samples, data, and records" (USEPA 2001). Traditional environmental quality assurance practice primarily associates the term with the trail of accountability that follows physical samples from their in situ locations through the analytical laboratory. However, the concept of a trail of accountability for data and records has always been present in environmental projects so that the results could eventually be of sufficient quality for evidentiary purposes. All projects, regardless of whether they use traditional or Triad approaches, should include a rigorous program for ensuring that the above definition of "chain of custody" is met. Guidance is available in (1) EPA Requirements for Quality Assurance Project Plans (USEPA 2001) and (2) Uniform Federal Policy for Implementing Environmental Quality Systems - Evaluating, Assessing, and Documenting Environmental Data Collection/Use and Technology Programs (USEPA 2003).
Potential incompatibility between existing regulations and the Triad approach primarily arises in those areas where the Triad seeks to improve on the traditional approaches that have developed over the past few decades. A main source of incompatibility stems from the Triad's recognition that contaminant heterogeneity is a major source of uncertainty in environmental decision making, and that recent improvements in measurement technologies and greatly improved computational power provide capabilities that simply were not available even a decade ago. Traditional approaches emphasized minimizing analytical uncertainty rather than spatially related uncertainty for the simple reason that it was too expensive to collect and analyze data at spatial densities sufficient to adequately portray the heterogeneous distribution of contaminants and media at most environmental sites. The result is that a mature and specific body of regulations exists to describe the requirements and methods for laboratory-based analytical procedures, but not yet for field methods used to collect large data sets rapidly. Regulation, once entrenched, is slow to change.
This concept is most apparent in those regulations that, despite being nominally risk-based, specify sample numbers, locations, and analytical quality and/or incorporate not-to-exceed-at-any-point cleanup criteria. The contradiction here is clear. Exposure to and harm from environmental contaminants, especially at the relatively low concentrations specified in most regulations, are functions of contact with widely dispersed contaminants ingested, inhaled, or otherwise contacted over long durations. The ability of a relatively few, widely spaced samples to represent the complexity inherent in the mechanisms that move contaminants along pathways to receptors is clearly very poor.
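The weakness of sparse sampling can be made concrete with a standard sampling-design calculation. The sketch below is illustrative only (the hotspot fraction and sample counts are hypothetical, not taken from any regulation): if a contaminated area occupies a fraction p of a site and n samples are placed at random, the chance that at least one sample lands in it is 1 - (1 - p)^n.

```python
# Illustrative sketch: probability that at least one of n randomly
# placed samples falls within a hotspot occupying fraction p of a
# site's area. The values of p and n below are hypothetical.

def hit_probability(p: float, n: int) -> float:
    """P(at least one of n independent random samples hits the hotspot)."""
    return 1.0 - (1.0 - p) ** n

for n in (10, 50, 100, 300):
    print(f"n = {n:3d}  P(hit) = {hit_probability(0.01, n):.2f}")
```

Under these assumptions, even 100 randomly placed samples miss a hotspot covering 1% of the site more than a third of the time, which illustrates why spatially dense data sets reduce decision uncertainty far more effectively than a handful of discrete samples.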
This particular potential incompatibility between the use of the Triad approach and existing regulation (i.e., prescribed sample numbers, locations, analytical methods, and not-to-exceed criteria) is most likely to arise when such regulations are used as ARARs (applicable or relevant and appropriate requirements) or are otherwise directly enforceable as part of a remedy. Ultimately, the party(ies) responsible for the remediation must demonstrate compliance, but estimates of the effort and cost required to achieve a site condition compatible with compliance are not necessarily sufficiently bounded by the characterization activities specified in the regulations. Many regulatory agencies also recognize that, even after the specifications embodied in existing regulation have been applied, areas of residual contamination exceeding the guidelines are often discovered later. This observation has led many in the regulatory profession to seek a more spatially robust means of identifying and characterizing contamination.
One of the principal reasons the Triad approach has found application is that it directly addresses the uncertainty associated with these issues. By focusing on decision uncertainty, Triad helps project stakeholders develop expressions of decision uncertainty in ways that can be quantified and, hence, managed.
For instance, one way to improve spatial certainty while maintaining full regulatory compliance is to use spatially dense data sets, often collected with real-time techniques, to define the nature and extent of contamination, and to reserve the protocol specified in the regulations for post-remedial closure. In this way, the party responsible for remediation obtains a data set with much less uncertainty about the magnitude of the problem, and the regulators' uncertainty about whether isolated areas of contamination exist is also addressed. Often the metric used to quantify decision uncertainty at this point is the uncertainty associated with estimates of the volume of impacted media.
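One simple way to express that volume uncertainty as a quantity stakeholders can manage is sketched below with entirely synthetic inputs; the cell volume, cleanup criterion, and concentration data are assumptions for illustration, not values from the text. The sketch estimates the impacted volume from a dense data set and attaches a bootstrap interval to it.

```python
# Hypothetical sketch: quantify uncertainty in the estimated volume of
# impacted media from a spatially dense data set. Cell size, criterion,
# and the synthetic concentrations below are illustrative assumptions.
import random

random.seed(1)
CELL_VOLUME_M3 = 10.0   # assumed volume represented by each measurement
CRITERION = 100.0       # assumed not-to-exceed cleanup level (e.g., mg/kg)

# Synthetic stand-in for a dense (e.g., real-time) measurement set.
concentrations = [random.lognormvariate(3.5, 1.2) for _ in range(500)]

def impacted_volume(data):
    """Volume of cells whose measured concentration exceeds the criterion."""
    return sum(CELL_VOLUME_M3 for c in data if c > CRITERION)

# Bootstrap resampling gives a simple interval on the volume estimate.
estimates = sorted(
    impacted_volume(random.choices(concentrations, k=len(concentrations)))
    for _ in range(2000)
)
lo, hi = estimates[50], estimates[-51]   # ~95% bootstrap interval
print(f"best estimate: {impacted_volume(concentrations):.0f} m3, "
      f"95% interval: {lo:.0f}-{hi:.0f} m3")
```

The width of the interval, relative to the best estimate, is one concrete expression of the decision uncertainty described above; a denser data set narrows it.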
Relationship to the DQO Process
A major aspect of the Triad is to continuously develop a CSM in conjunction with systematic planning to determine the type, timing, quantity, and quality of information needed to reduce decision uncertainty to acceptable levels. This aspect of the Triad is a good example of how it has built upon previously developed environmental practice. USEPA's Data Quality Objectives Process (USEPA 2000), originally published in 1994, recommends using the process "...to develop Data Quality Objectives [DQOs] that clarify study objectives, define the appropriate type of data, and specify tolerable levels of potential decision errors that will be used as the basis for establishing the quality and quantity of data needed to support decisions."
The USEPA guidance cited above establishes seven steps in the DQO process.
- State the Problem
- Identify the Decision
- Identify the Inputs to the Decision
- Define the Boundaries of the Study
- Develop a Decision Rule
- Specify Tolerable Limits on Decision Errors
- Optimize the Design for Obtaining Data
Partly because of the past practice of emphasizing analytical error over other sources of error, the focus of the DQO process was placed primarily on the placement and analytical quality of sample sets composed of discrete samples. (It is important to note that this was not the sole intent of the guidance.) This emphasis often resulted in sampling plans that followed a rather awkward "cookbook" application of the seven steps in an attempt to determine the type, timing, quality, and quantity of samples to be collected in a campaign. The term Data Quality Objective was often narrowly defined as an objective principally described in terms of analytical data quality.
The Triad seeks to revive the original concept of the DQO Process, emphasize the use of the scientific method underpinning it, and extend the paradigm to semi-quantitative and even qualitative domains.
The Triad does so by making the working hypothesis, in the form of the CSM, the central focus of the systematic planning process. The CSM represents the physical realities of the site that govern contaminant nature, extent, and distribution. Testing and refining the CSM hypothesis is the core of the dynamic work strategy. From this perspective, the role of real-time measurements becomes evident: real-time measurements are used in conjunction with traditional analytical techniques to adaptively test the CSM in as close to real time as possible, simply because this is the most effective and efficient way to accomplish the work. Decision statements (instead of DQOs) express the objective of the test being undertaken, and the measurement techniques chosen are those most appropriate, efficient, and cost-effective for whatever element of the CSM is being tested.