1. What is the Triad approach?
The Triad approach is a next-generation framework for economically managing project decision uncertainties at hazardous waste site cleanups, including both the characterization and remediation phases. A critical Triad product is a conceptual site model accurate enough to support productive, cost-effective decisions about contaminant presence, receptor exposure, and risk reduction design. The Triad has three primary components: systematic planning, dynamic work strategies, and real-time measurement systems. The Triad approach draws on technical knowledge and expertise gained over the past 20 to 30 years of hazardous waste site cleanup, incorporating advancing science into site management policies and practices. The Triad explicitly recognizes that environmental matrices are heterogeneous in composition and that contamination is heterogeneous in distribution. The Triad approach copes with the complexities caused by heterogeneity by: 1) using project-specific conceptual site models to distinguish different contaminant populations with respect to project decisions, 2) incorporating a second-generation model for environmental data quality, and 3) exploiting new characterization and analytical tools and work strategies to expedite and improve site decision-making.
The full benefits of the Triad are realized when systematic planning is combined with dynamic work strategies and real-time measurement systems. There may be instances where dynamic work strategies are not possible or appropriate. In these cases, portions of the Triad can still be implemented with significant benefits.
2. Who should use the Triad approach?
Project managers and their technical staff responsible for characterization and cleanup activities at potentially contaminated sites and known hazardous waste sites should consider using the Triad approach. Federal, state, and local agencies responsible for the oversight of cleanup activities should also consider promoting Triad concepts.
3. Why should I consider using the Triad for my project?
The Triad approach offers significant benefits to most sites undergoing characterization and remediation. These include the potential for reduced characterization costs, expedited schedules, enhanced stakeholder participation, and improved site decision-making. Improved site decision-making, in turn, can lead to remedial actions that are more effective (i.e., lower possibility of contamination being inadvertently missed) and efficient (i.e., quicker implementation, reduced total site characterization and remediation costs).
4. When should the Triad be applied?
Systematic project planning should always be applied when tackling characterization and cleanup projects at hazardous waste sites. Dynamic work strategies can also be used throughout the characterization and remediation process, but offer the most benefit during the design and implementation of cleanup actions. Real-time measurement technologies can also be implemented throughout all portions of the cleanup process, from initial investigation through remediation and into long-term monitoring.
While the greatest benefits accrue from implementing the Triad as early as possible in the characterization and cleanup process, there are also benefits produced by integrating the Triad into ongoing characterization or remediation activities. For example, although a traditional approach may have been taken during the early stages of site characterization, the Triad may be implemented during further characterization activities, or to expedite and improve site remediation.
5. Are there any situations where I should not use the Triad?
Components of the Triad can be used to improve the characterization and remediation process in almost every situation. For example, the Triad’s goal of managing decision uncertainty despite the difficulties of environmental heterogeneity is important for every project. Systematic planning, along with the development of a conceptual site model to understand contaminant distribution and transport, is also fundamental to any defensible project.
There are site-specific situations where the Triad may not be appropriate or cost-effective. Use of the Triad (i.e., systematic planning, dynamic work strategies, and real-time measurement technologies) presumes that stakeholders accept a Triad approach. In highly controversial or litigious situations, the needed level of cooperation may be difficult to achieve. Dynamic work strategies require cost-effective real-time measurement technologies with performance characteristics suitable for decision goals. For many common contaminants of concern (e.g., radionuclides, lead, PCBs, explosives, volatile organics) there are well-established technology options. For more esoteric or unusual contaminants, an off-the-shelf real-time measurement technology may not be available. Finally, there may be site-specific obstacles to deploying particular real-time techniques (e.g., the presence of underground utilities that complicate non-intrusive geophysics, or subsurface cobble zones that prevent the use of sensors deployed on direct push platforms).
Many sites have multiple contaminants, some of which may be amenable to real-time measurement technologies while others may not. This can potentially but not necessarily complicate a Triad approach. In many cases an indicator compound may be present that is amenable to real-time measurement and that can be used to support decision-making, allowing full use of the Triad. If the decisions to be made and contaminants present require significant fixed laboratory analyses, some of the benefits of implementing real-time technologies may be reduced.
6. Is the Triad equally applicable to soil, surface water, groundwater, and sediment issues?
Conceptually, a Triad approach is equally applicable to any setting where contaminated environmental media are present that require characterization and potential remediation. The more heterogeneous the media and contaminants, the more valuable the Triad will be in avoiding the collection of non-representative and misleading data sets. Most real-time measurement technologies apply to specific media types and specific contaminants or classes of contaminants within these media types. For individual sites, media characteristics and decision requirements will need to be evaluated against available measurement technology options to determine site-specific appropriateness and applicability.
In practice, implementation of components of the Triad may also be constrained by applicable state or Federal regulations. Examples of these types of constraints include state guidelines that may specify the number and types of samples and the analytical methods required to demonstrate compliance with cleanup standards at particular classes of sites (e.g., surface water analytical requirements promulgated by the Federal government as part of the Clean Water Act). However, even in those instances portions of the Triad can be used to improve the efficiency and effectiveness of project planning and implementation.
7. How will the Triad affect my project schedule and budget?
Over the life-cycle of a project, implementation of a Triad approach should result in expedited project schedules and overall project cost reductions. However, for particular activities, a Triad approach may require additional expenditures of time and/or resources initially to realize overall project benefits. For example, systematic planning is one of the three underpinnings of the Triad. While short-cuts in the systematic planning process (e.g., selecting a standard analysis technique without review for site-specific appropriateness) may result in immediate project cost-savings and schedule acceleration, this will be at the expense of overall project quality, schedule, and budget. Conversely, up-front investments in demonstrations of methods applicability early in the characterization and remediation process to gain regulatory acceptance for a particular set of real-time measurement technologies may introduce additional cost initially, but produce significant cost-savings and schedule compression for the project as a whole as it moves through characterization to remediation.
8. What are some examples of actual sites where the Triad has been used?
While the term "Triad" has been coined relatively recently by the EPA, the Triad approach reflects significant positive experience gained in the last decade through applying systematic planning, dynamic work strategies, and real-time measurement technologies to hazardous waste site cleanup. The three components of the Triad have been successfully deployed at private and federal sites across a wide range of contamination scenarios, and at all points in the characterization and remediation process. For specific examples, please refer to the Case Studies portion of the Triad Resource Center web site.
9. How does the Triad differ from what I would traditionally do under CERCLA or RCRA?
The Triad deviates from work conducted traditionally under CERCLA or as part of a RCRA corrective action in two important ways. The first is a reliance on real-time measurement technologies to build a more detailed conceptual site model and provide cost-effective and timely information to the characterization and remediation process. The second is the use of dynamic work strategies that leverage real-time data to keep work activities as focused and efficient as possible. In practice this means that the number of samples collected, their location, the types of analyses used, the level of QA/QC performed, the course of remediation, etc. are open to modification under the Triad as work proceeds in response to conditions that are encountered and real-time data that are generated.
In contrast, traditional CERCLA and RCRA corrective action activities have been very prescriptive in nature and are largely defined by the time work is undertaken in the field. The consequence has been little or no flexibility to address conditions that differ from what was assumed during activity planning. The third leg of the Triad, systematic planning, has been a cornerstone of good project management under CERCLA and RCRA since the mid-1990s, but it receives special attention under the Triad approach because of its importance for managing decision uncertainty.
10. Is the Triad another tool in my kit, or a whole new toolbox? Do I use it in addition to or as a substitute for standard techniques?
The Triad offers a whole new toolbox for approaching decision-making and associated data collection efforts during hazardous waste site characterization and remediation. The use of a Triad approach will likely include standard techniques to some degree as appropriate, but will do this within the overall goal of managing decision uncertainty so that decisions can be made with acceptable levels of confidence. Although many of the individual project activities are the same as, or very similar to, those performed before, the order in which they are performed may be different under the Triad.
11. What makes the Triad different from past efforts to improve the CERCLA process (the Observational Approach, Expedited Site Characterization (ESC), Data Quality Objectives (DQO), the Streamlined Approach for Environmental Remediation (SAFER), Superfund Accelerated Cleanup Program (SACM), etc.)? Is the Triad here to stay?
The Triad builds on experiences gained from prior efforts to improve the CERCLA and RCRA corrective action processes begun in the 1980s. The Triad is a natural technical practice progression in response to advances in science and technology. This progression has been in concert with policy development driven by economics and a maturing regulatory environment. The Triad embraces techniques and concepts used by the Observational Approach and Expedited Site Characterization to manage environmental decision-making in the presence of uncertainty. The Triad reflects EPA’s efforts to organize past successful modifications to the standard CERCLA and RCRA corrective action process into a coherent, replicable, technically defensible approach for managing the uncertainty associated with hazardous waste site decision-making.
In the past decade there have been significant technological and methodological advancements in analytical methods and measurement systems, as well as in supporting technologies such as Global Positioning Systems and direct-push sampling techniques. Some of these advancements have found their way into SW-846 guidance and day-to-day field practices and are here to stay. Their availability is what makes the approach advocated by the Triad possible. There are already ample case studies that demonstrate the potential benefits of weaving real-time measurement systems into the characterization and remediation process. The expectation is that as practitioners, regulators, technology providers, and project managers become comfortable with a Triad approach, the use of Triad components (e.g., systematic planning, dynamic work strategies, and real-time measurement systems) will become standard practice.
12. Where in the cleanup process can I use the Triad? In the different phases of a RCRA corrective action or CERCLA project (e.g., initial assessment, site characterization, cleanup, closure), where will the Triad save money and where will it cost more compared to traditional methods?
The Triad has application at every point in the CERCLA and RCRA corrective action process. The Triad has the greatest potential for cost savings in the latter portions of the process, during remedial design and implementation. However, implementation of the Triad early on in the cleanup process greatly increases the opportunity for fully integrating the approach into later stages. The use of the Triad will potentially increase costs early in the process due to the potential need for site-specific demonstrations of method applicability for analytical techniques that are not standard. However, these costs will be more than balanced by the benefits of more confident site decision-making and cost savings realized during characterization and remediation. In addition, the use of the Triad early in the process can significantly compress schedules during later stages of the cleanup process.
13. What is systematic planning?
Systematic project planning is the most important and universally applicable element of the Triad approach. Triad systematic planning is a common sense approach used to identify the decision that needs to be made to reach project goals, determine the uncertainty associated with the decision based on current site understanding, and develop methods for managing that uncertainty so that decisions can be made with acceptable levels of confidence. For most sites, managing uncertainty is synonymous with collecting more information (e.g., samples and data), which is where dynamic work strategies and real-time measurement technologies become important. Developing a preliminary conceptual site model that will be progressively refined as the project proceeds is a critical component of systematic planning. There are a number of existing systematic planning frameworks that can be used to implement systematic planning under the Triad, including the EPA's Data Quality Objectives process and the USACE's Technical Project Planning process.
Practitioners should be aware that Triad systematic planning tends to be more intensive than the planning many are accustomed to. Triad systematic planning involves asking all parties to negotiate and reach consensus on the goals of the project before significant resources are invested in field work. A key way to manage decision uncertainty is to make sure everyone is clear about exactly what decisions the project is expected to resolve. Face-to-face meetings of project participants are the most efficient mechanism to talk through issues and reach consensus about the desired project outcome and related tolerances on decision uncertainties. That consensus will serve as the foundation for all subsequent agreements about how complete the conceptual site model must be by the end of the project, which will in turn govern the design of sampling and analysis programs. Achieving this kind of agreement early in the project, before significant investment is made in field work, can require considerable expenditure of time and energy. Having all parties commit to this level of systematic planning is one of the major hurdles in implementing Triad projects.
14. What is uncertainty management?
In general, the Triad uses the term "uncertainty" to refer to things that are unknown to a greater or lesser degree. Uncertainty management begins with obtaining clarity among all involved parties about the goals and decisions for a particular project. Management of decision uncertainty depends on managing the uncertainty in the conceptual site model that will be used to support decision-making, usually through the collection of environmental data. Uncertainty in data generation is managed by controlling all relevant sampling and analytical factors that introduce confounding variability into data results.
Every cleanup decision has some uncertainty associated with it due to an inherent incomplete understanding of actual site conditions. For example, there is always the possibility that some unit of soil that is contaminated above cleanup guidelines will be missed by characterization and remediation work. Conversely, experience has shown that remedial activities often inadvertently capture and treat material that probably already met cleanup criteria. Uncertainty management is the attempt to manage or control decision-making uncertainty so that decisions can be made with an acceptable level of confidence. Uncertainty management usually includes collecting additional information about the contamination status of a site or portions of a site so that unacceptable uncertainty can be resolved. This is where dynamic work strategies and real-time measurement systems become important. The Triad provides an efficient and effective means for managing decision-making uncertainty through the affordable gathering of additional information.
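The risk of inadvertently missing contamination can be illustrated quantitatively under idealized assumptions. The short Python sketch below computes the chance that a set of independently, randomly located samples all miss a contaminated zone occupying some fraction of the site area; the 5% fraction and 10-sample count are illustrative numbers, not values from any guidance:

```python
def miss_probability(hot_spot_fraction, n_samples):
    """Chance that n randomly located samples all miss a contaminated
    zone occupying the given fraction of the site area.

    Idealized model: samples are independent and uniformly placed, and
    a sample landing in the zone is assumed to detect it.
    """
    return (1 - hot_spot_fraction) ** n_samples

# A zone covering 5% of a site is missed roughly 60% of the time
# by 10 randomly placed samples.
print(round(miss_probability(0.05, 10), 2))  # -> 0.6
```

The point of the sketch is that sparse sampling leaves substantial residual uncertainty, which is why denser real-time measurements are valuable for uncertainty management.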
Under the Triad, uncertainty can be managed qualitatively through professional judgment or a weight-of-evidence approach, or through quantitative means such as statistical hypothesis testing. The level of decision confidence to be achieved, and how that confidence will be demonstrated, is determined during systematic planning with the input, and hopefully consensus, of project participants.
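As a simplified illustration of the quantitative route, the Python sketch below checks whether an upper confidence limit (UCL) on the mean concentration falls below a cleanup guideline. It uses a normal approximation for brevity; a real project would typically use a t-based UCL or EPA's ProUCL software, and the lead readings and 400 mg/kg guideline shown are hypothetical:

```python
from statistics import NormalDist, mean, stdev

def mean_below_guideline(samples, guideline, confidence=0.95):
    """Return True if the one-sided upper confidence limit on the mean
    concentration is below the cleanup guideline.

    Sketch only: uses a normal (z-based) approximation, which is
    optimistic for small sample sizes; real projects would use a
    t-based UCL or EPA's ProUCL.
    """
    n = len(samples)
    z = NormalDist().inv_cdf(confidence)  # one-sided z value
    ucl = mean(samples) + z * stdev(samples) / n ** 0.5
    return ucl < guideline

# Hypothetical lead results (mg/kg) against a 400 mg/kg guideline
readings = [120, 250, 180, 310, 90, 200, 150, 275]
print(mean_below_guideline(readings, 400))  # -> True
```

Whether this kind of test is an appropriate demonstration of decision confidence, and at what confidence level, is itself an output of systematic planning.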
15. What is a conceptual site model?
A conceptual site model (CSM) is a representation of the relationships among key site features that pertain to the decisions that need to be made. CSMs can take many forms, from a written description of contaminant release and transport mechanisms, to simple schematics, to complicated 3-dimensional computer models of contaminant state, fate, and transport. The form and complexity of a CSM is determined by the significance of the decision to be made (i.e., the implications of being wrong) and the level of site understanding required to make that decision. Using CSMs makes projects more cost-effective because a CSM delineates the contaminant populations for which exposure and remediation decisions are different, avoiding inappropriate classification of on-site media. A CSM is by nature evolutionary, changing and becoming more accurate as additional information is gathered pertinent to the site and its conditions. The Triad approach provides the mechanism for performing this updating process while data collection is underway. CSMs are critical components of the Triad systematic planning process. They capture all key site information pertinent to the decisions that need to be made, and allow an explicit evaluation of the uncertainty associated with those decisions. If that uncertainty is considered unacceptable, the CSM assists in developing uncertainty management approaches, including identifying key data gaps that, if filled, would allow decision-making to proceed at acceptable levels of confidence.
16. What is the connection between the Triad and the Technical Project Planning approach developed by the Army Corps of Engineers?
The U.S. Army Corps of Engineers (USACE) developed the Technical Project Planning (TPP) process to improve planning activities associated with hazardous, toxic, and radioactive waste (HTRW) site cleanup. The TPP process is an example of a Triad-consistent systematic planning process that involves four different phases of planning activities. The TPP process is meant to be initiated at the start of activities associated with a HTRW site and continue through the life-cycle of cleanup. The expectation is that the application of the TPP process will ensure that the requisite type, quality, and quantity of information are obtained to satisfy project objectives.
The TPP manual is available at the following URL: http://www.usace.army.mil/inet/usace-docs/eng-manuals/em200-1-2/toc.htm
17. What is a dynamic work strategy?
Dynamic work strategies are a product of systematic planning. They allow work activities to change or adapt in real time to new information that becomes available during the course of the activity. Dynamic work strategies require a source of timely information to be effective, which is why real-time measurement technologies are important to the Triad. Dynamic work strategies are usually embodied in "if-then" statements that provide guidance on how work should proceed based on the information gathered, and that are captured in appropriate project documentation such as a sampling and analysis plan, quality assurance project plan, or similar planning documents.
Dynamic work strategies have wide application in the hazardous waste site characterization and remediation process. They can be the basis of sampling and analysis plans that are implemented as adaptive sampling and analysis programs. They can be built into overall quality assurance (QA) and quality control (QC) efforts, leading to focused QA/QC programs. They can be incorporated into remediation designs, enabling precision excavation techniques for contaminated soils and sediments. They can also be important components of long-term monitoring programs.
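A minimal sketch of how such "if-then" decision logic might be encoded is shown below (Python). The 50 ppm action level, the threshold multipliers, and the action descriptions are hypothetical placeholders for what an actual work plan, developed through systematic planning, would specify:

```python
def next_action(field_result_ppm, action_level_ppm=50.0):
    """Hypothetical 'if-then' rule from a dynamic work plan.

    Maps a real-time field measurement to the next field activity.
    All thresholds and actions are illustrative, not from any real plan.
    """
    if field_result_ppm >= 2 * action_level_ppm:
        # Well above the action level: contamination is present;
        # step out to delineate its extent.
        return "delineate: step out and collect additional samples"
    if field_result_ppm >= 0.5 * action_level_ppm:
        # Near the action level: the field method alone cannot
        # resolve the decision; confirm with a fixed laboratory.
        return "confirm: send split sample to fixed laboratory"
    # Well below the action level: no further work at this location.
    return "stop: no further sampling at this location"
```

In practice this logic would be recorded in the sampling and analysis plan so that field teams and regulators agree in advance on how real-time results drive the work.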
18. Can I use real-time measurements and field data to support CERCLA risk assessments? How?
Because of the significance of CERCLA risk assessments for site decision-making, data must be of sufficient quality to meet risk assessment needs. From an analytical perspective, this means that detection limits should be below levels that would be of concern, that the method has sufficient specificity and selectivity to provide results for the contaminants of concern, that sampling is representative of the exposure unit under consideration, and that QA/QC protocols are sufficient to support analytical quality. These requirements are usually met by using standard fixed-laboratory methods. However, in many cases the same instrumentation that is used at the fixed laboratory can be used in or near the field (e.g., a mobile laboratory) to support dynamic work strategies. In these situations, with the proper QA/QC protocols, data produced by these methods may be as good as or better than what a fixed laboratory would produce.
In other cases, the measurement techniques used to produce real-time measurements may not be of the quality associated with fixed laboratory techniques because of potential interference issues, lack of specificity or selectivity, insufficient detection capabilities, etc. While these data may not be useful for directly supporting risk calculations, they can still be important to the overall quality of the risk assessment. For example, they can provide a much greater density of information about the spatial distribution of contamination and the presence of localized, highly elevated areas of special concern. They can be used to assist in the selection of sampling locations for more definitive analyses via standard fixed-laboratory techniques. They can also be used to verify assumptions about the conceptual site model that may be critical to the risk assessment (e.g., the vertical distribution of contamination in a soil or sediment profile). In other words, they can support development of the CSM that determines the representativeness of the high analytical quality data points, which are invariably taken at spatial densities too low to define the spatial scales of contaminant heterogeneity. The CSM developed from these less rigorous measurements is critical to support correct interpretation of low-density fixed laboratory data.
19. What is the difference between screening quality data and decision quality data, and how are each used in the Triad approach?
Within the context of the Triad's second-generation data quality model, "data quality" is assessed according to the data's ability to support confident decision-making. "Decision quality data" are defined within the Triad as data that can support the decision to be made at the desired level of confidence. Decision quality data sets often comprise data from different sources (e.g., a smaller number of analyses from standard fixed laboratory methods supplemented with a larger number of real-time measurements) so that all aspects of data uncertainty (sampling, analytical, and relational) are managed. The term "collaborative data" is used in the context of the Triad to express this.
Following the same line of reasoning, "screening quality data" are data that provide some useful information for the decision to be made, but taken alone are not sufficient to make the decision at the desired level of confidence. Screening quality data are data of known quality; however, excessive analytical, sampling, or relational uncertainty is present with respect to the decision to which the data are being applied. Excessive analytical uncertainty can include detection limits that are too high, or analytical bias or imprecision that is too great. Excessive sampling uncertainty occurs when there are insufficient data points to support conclusions about important population parameters, such as whether the mean concentration is below cleanup guidelines, whether hot spot concerns are absent, or the actual boundaries of spatially patterned contamination. Excessive relational uncertainty exists when the interpretation of measurement results is ambiguous with respect to decision-making. Both standard fixed laboratory analyses and field deployable technologies can produce decision quality data or screening quality data, depending on the level of analytical, sampling, and relational uncertainty present with respect to the decision.
Generally, screening quality data will be used in conjunction with other data with the goal that the combined information will manage sampling, analytical, and relational uncertainties (i.e., the overall uncertainty) in the site data and the CSM constructed from that data. For sampling uncertainty, once there is some understanding of what the target analyte list might be, less expensive methods could be used to map the area for those analytes. Higher analytical quality data with lower reporting limits or less bias may be needed to control the analytical uncertainty. Specific areas may need to be sampled and analyzed with higher quality methods where relational uncertainty is unacceptable. The outcome is a collaborative data set that represents decision quality data and confidence in the CSM. Within the Triad approach, decision quality data are used ultimately to support decisions that need to be made. Less expensive, real-time measurement technologies, if used alone, often produce data of only screening analytical quality. Expensive fixed laboratory methods, if used alone, usually produce data of only screening sampling quality (i.e., insufficient spatial representativeness).
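One common way to judge whether a real-time method can carry the bulk of a collaborative data set is to compare paired field and fixed-laboratory results for the same samples. The Python sketch below fits a simple least-squares line between the two methods; the paired XRF and laboratory values are invented for illustration, and a real comparability study would involve more pairs and formal acceptance criteria:

```python
# Hypothetical paired measurements: field XRF vs fixed-lab result (mg/kg)
field = [35, 60, 110, 210, 400, 520]
lab = [40, 55, 125, 190, 430, 505]

# Ordinary least-squares fit of lab result as a function of field result
n = len(field)
mx, my = sum(field) / n, sum(lab) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(field, lab))
sxx = sum((x - mx) ** 2 for x in field)
syy = sum((y - my) ** 2 for y in lab)
slope = sxy / sxx
intercept = my - slope * mx
r = sxy / (sxx * syy) ** 0.5  # correlation coefficient

# A slope near 1, a small intercept, and a high r suggest the field
# method tracks the lab method well enough to supply the high-density
# portion of a collaborative data set.
print(f"lab ~= {slope:.2f} * field + {intercept:.1f}, r = {r:.3f}")
```

If the fit is poor, the field data may still serve screening purposes (e.g., locating where to collect lab samples) even though they cannot stand in for laboratory results.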
20. What guidance is there for developing work plans and documentation consistent with the Triad? Is there training available?
The Triad Resource Center provides information about developing work plans and documentation consistent with the Triad. In addition, as the Triad becomes mainstream, the EPA expects that documentation and guidance will be prepared by other organizations and agencies to facilitate the acceptance and implementation of a Triad approach. As an example, the New Jersey Department of Environmental Protection is currently developing guidance for sites in that state that are considering deployment of the Triad. The Interstate Technology & Regulatory Council (ITRC) has released a working document intended to help ITRC-member states who are considering the use of the Triad. In addition, there are resources available through EPA's Clu-In web site (http://www.clu-in.org) and through EPA's listing of courses available to staff of EPA and other federal, state, tribal, and local governmental agencies involved in hazardous waste management and remediation (see The Training Exchange website at http://trainex.org).
Live Internet-based, on-line training is also periodically available as announced through the EPA Technology Innovation Program’s TechDirect monthly list serve/newsletter and on the Clu-In website (in the "Studio" section at http://cluin.org/studio/seminar.cfm). Users can sign up to receive the electronic TechDirect newsletter by visiting the Clu-In website at http://cluin.org/newsletters/. Recorded archives of past live Internet seminar deliveries are accessible anytime through this URL: http://cluin.org/live/archive.cfm.
21. What are the different uses of real-time measurements and dynamic work strategies within remedial design and implementation?
Real-time measurement systems and dynamic work strategies can perform a number of roles during remedial design and implementation. A common issue in the CERCLA site remediation process is that data generated as part of the remedial investigation are insufficient for remedial design purposes. For example, there may still be significant lingering uncertainty about the actual extent of contamination that must be addressed by the remedial action. This uncertainty is not surprising since often the list of contaminants of concern and their cleanup requirements are not finalized until after the remedial investigation is complete. Dynamic work strategies combined with real-time measurement systems can be used to fill those data gaps before remediation begins.
Dynamic work strategies and real-time measurements can change the nature of the remediation itself. For example, real-time measurement systems can be built into a remedial action to guide its course. This can be particularly effective for supporting remediation of subsurface soils or sediments. Dynamic work strategies and real-time measurement systems can be used to adjust groundwater remediation systems that involve active interventions (e.g., modifying injection or extraction rates). Dynamic work strategies and real-time measurements can be used to support waste profiling during remediation, or to fine-tune process control for a destructive remediation technology, or to provide on-the-fly waste acceptance criteria testing. Finally, dynamic work strategies and real-time measurements can be used to determine whether cleanup has been successful while work is underway, minimizing the possibility that remediation teams have to be remobilized to address lingering residual contamination concerns identified during closure data collection.
22. How do I use real-time data to modify my work activities? What tools are available to help me do this?
The way real-time data are used to modify work activities during characterization or remediation depends on the decisions that need to be made and often is site-specific in nature. The "if-then" logic that specifies how work activities will be modified based on real-time data reflects the dynamic work strategy, is an outcome of systematic planning, and is ultimately captured in appropriate planning documentation. The Triad User Experiences section of the Triad Resource Center web site provides examples of dynamic work strategies used at sites, and the methods employed to support real-time decision-making.
23. What is the connection between the Triad approach and an "observational approach" to site characterization?
The observational approach has its roots in geotechnical engineering, and the recognition that geotechnical engineering decision-making often is confronted by significant uncertainties arising from an incomplete site understanding. The heterogeneity of real-world geological systems makes it imperative to manage uncertainty at the scale of individual geotechnical decisions. Likewise, the Triad approach recognizes that chemical contaminants are distributed heterogeneously (sometimes with, and sometimes without, clear spatial patterning) at hazardous waste sites. Therefore all data gathering, characterization, and remediation strategies must start from that assumption. As with the observational approach, the Triad relies on real-time decision-making as a way to manage decision uncertainty caused by heterogeneity in a cost-efficient way. Modifying data collection while work is underway permits refinement of site conceptual models and adjustments to designs and work activities to match actual site conditions as they are discovered and understood.
24. What are real-time measurement technologies?
Within the Triad, real-time measurement technologies are technologies that provide and manage results quickly enough to affect the course of ongoing field work. The Triad does not try to define "real-time" in terms of the absolute number of seconds or minutes required for the result to be generated, because such a distinction is necessarily arbitrary. Also, as technologies evolve, the expectations of data users for what is considered "real-time" are also evolving. Finally, since the Triad grounds all concepts in the goal of confident decisions, use of the term "real-time" is also grounded in what is required to support project decision-making while work is underway. One outcome of this definition is that a particular measurement or analytical technology may produce results that are "real-time" for the purposes of one field activity (e.g., site characterization), but not for another (e.g., remediation support).
Since Triad terminology defines "real-time" in terms of supporting on-site decision-making, and not in terms of minutes for result turn-around, the term "near real-time" is not typically used in the context of the Triad. Since "real-time" can be measured in days, hours, or minutes, or can truly be instantaneous, technologies considered "real-time" within the Triad context encompass a wide range of tools used to evolve the conceptual site model simultaneously with field work. Other environmental practitioners may have equally justifiable reasons to define the term "real-time" differently than Triad usage, and to make distinctions between the terms "real-time" and "near real-time". As with all environmental terminology for which standardized usage does not exist, participants in a project should clarify their terminology to facilitate unambiguous communication and consensus.
Although real-time measurement technologies are usually associated with field-based measurement techniques, standard fixed-laboratory methods that have an expedited turnaround are also included by the Triad approach in this broad category of real-time analytical options. The added cost for quick turn-around from a fixed lab can be cost-effective if it leads to more efficient field activities that produce greater savings.
The term "real-time measurement technologies" also includes a variety of rapid turnaround geophysical, geochemical, geotechnical, and global positioning system (GPS) techniques, as well as a range of tools for managing data in real-time. Real-time data management tools usually involve software (and the hardware needed to run them). Not all of the tools need be physically present on-site if provision is made to transfer data electronically between the field and office-based systems. Databases, graphical programs, and statistical algorithms facilitate real-time data storage, retrieval, quality review, display, reduction (i.e., calculations), mapping, and sharing between field and office and among data users. Computer-based data manipulation technologies are critical when large numbers of data points must be rapidly and reliably handled in real-time. Software tools that are used to statistically manipulate or graphically present data rapidly enough to support data users in their decision-making process are frequently termed "decision support software (DSS) tools." The integration of data gathering, processing, and interpretation technologies into an efficient, intercommunicating network can be called a "real-time measurement system."
25. What are field-based measurement technologies?
Field-based measurement technologies are a subset of "real-time measurement technologies," so the terms are not exactly interchangeable. The term commonly refers to measurement techniques that can be deployed on-site during the course of a characterization or remediation program to generate analytical data. A number of terms are equivalent, such as field analytics, field analytical methods (sometimes abbreviated as "FAMs"), on-site analysis, and others. They all convey the idea that analysis is being performed at or near the location where the environmental samples were collected, as opposed to samples being shipped off to a distant laboratory. Conversely, a distant laboratory may provide data results rapidly enough to support in-field decision-making, and so qualify as "real-time measurements" under the Triad, but it would not be considered field-based. "Field-based" denotes a level of hardware robustness and mobility that is different from what one would expect with standard fixed-laboratory measurement systems. Field-based methods have advanced tremendously over the past 5 to 10 years in response to evolving capabilities in lasers, electronics, molecular biology, computerization, microfabrication, and other fields.
Field-based measurement technologies cover a wide range of technical options, including systems capable of in situ measurements, systems that perform ex situ measurements on sampled media in the field, and systems that can be deployed in an on-site, mobile laboratory. In contrast to the rather rudimentary capabilities of ten years ago, field analytical methods now provide analytical performance that spans a wide range of quality and utility. Some field methods provide the non-specific, qualitative, or semi-quantitative analyses (often collectively called "screening analyses") traditionally associated with field analysis. However, a growing number of field techniques are able to provide the quantitative, analyte-specific analyses typically associated with standard fixed-laboratory techniques. With the appropriate technology and QA/QC protocols, some field-based systems can provide data that are of the same or even better quality than traditional fixed-laboratory analyses.
The cost of field-based measurement technologies varies: field data are often much cheaper per data point than their fixed-laboratory counterparts, though occasionally per-sample costs may be higher than standard fixed-laboratory analyses. In that case, the utility of the field method derives from its ability to save even greater resources because real-time decision-making improves the efficiency of expensive field crews and their equipment. Balancing costs against benefits for sampling and analytical options is an integral part of systematic planning, such as Step 7 of the DQO process.
26. How did the current concepts about "data quality" develop?
Waste programs depend on analytical methods to detect and quantify low or trace contaminant concentrations in very complex matrices, such as soils, sediments, waste materials, and natural waters. No such technologies existed early in EPA’s history. They were developed and standardized in the 1970s and 1980s. It was expected then that site data would be more reliable if analytical procedures were more sophisticated, and their operation was standardized. This concept became codified through EPA guidance, state regulations, and lab certification programs that laid down strict requirements for which analytical methods were acceptable and how they were implemented. "Data quality" was defined in terms of these strict guidelines. "Definitive" and "screening" levels of data quality were defined according to the rigor of the analytical method and associated QC.
Although this first-generation data quality model made sense at the time, practitioners discovered that it had fatal flaws. One critical oversimplification is its failure to account for the fact that contaminants are heterogeneously distributed at both smaller and larger spatial scales throughout environmental matrices. Spatial contaminant heterogeneity may be random or be strongly patterned by pollutant release and migration mechanisms. In either case, the existence of spatial heterogeneity makes it difficult to reliably extrapolate the results of tiny 1- or 2-gram analytical samples back to the tons of matrix from which the samples came. Expensive analyses accurate to 2 decimal places on 1-gram samples are not particularly useful if two 1-gram samples from the same sample jar, or only 1 foot apart in the field, differ by orders of magnitude. Heterogeneity also complicates the design of analytical methods: variations in composition and particle size can alter the efficiency of sample preparation procedures. Acceptable performance of a standardized method on idealized matrices such as clean sand and reagent water cannot be presumed to predict equivalent method performance on real-world samples. Enough project experience has accumulated for the environmental community to recognize that generating and interpreting pollutant data is more complicated than originally expected. The first-generation data quality model does not accommodate real-world variability well enough to support efficient projects.
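The sampling problem described above can be illustrated with a small simulation. The lognormal distribution and its parameters are assumptions chosen only to mimic strong heterogeneity, not data from any site; the point is that two tiny grab samples can disagree wildly while a larger composite tracks the true average much better.

```python
# Illustrative simulation of matrix heterogeneity (assumed lognormal
# concentration distribution -- a common but hedged modeling choice).
import random
import statistics

random.seed(1)
# Simulated concentrations across a decision unit:
population = [random.lognormvariate(mu=2.0, sigma=2.0) for _ in range(10_000)]

true_mean = statistics.mean(population)
two_grabs = random.sample(population, 2)  # two 1-gram grab samples
print(f"true mean          : {true_mean:8.1f}")
print(f"two grab samples   : {two_grabs[0]:8.1f}, {two_grabs[1]:8.1f}")

# Averages built from many increments are far more stable:
composite = statistics.mean(random.sample(population, 30))
print(f"30-increment mean  : {composite:8.1f}")
```

Running this repeatedly with different seeds shows the grab-sample pair swinging across orders of magnitude while the 30-increment mean stays comparatively close to the true mean, which is the practical argument for sampling density.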
27. What is the "second-generation data quality model" that the Triad relies on?
The Triad approach builds on real-world field, laboratory, and regulatory experiences to construct a second-generation data quality model based on two fundamental principles:
From these basic principles emerges the need to:
The second-generation data quality model recognizes that perfectly "accurate" analytical results for tiny samples can be misleading if sampling was not dense enough to manage spatial heterogeneity. Standard fixed laboratory methods, at least as currently deployed, are generally too expensive to permit the higher sampling density needed to manage all sources of decision uncertainty. Screening analytical methods and many field-deployable methods can provide higher sampling densities at an affordable cost. In addition, these methods usually provide real-time results, allowing field work to adapt to the realities encountered. Therefore, these tools can play vital roles in developing the conceptual site model and supporting project decisions. With less expensive real-time methods available to manage sampling uncertainties and guide data collection, standard fixed lab analysis can continue to play its traditional role, but on samples selected specifically for their ability to be representative of the intended decision or to clarify ambiguities present in real-time measurements.
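The cost tradeoff behind this collaborative use of methods can be shown with back-of-the-envelope arithmetic. The unit costs and sample counts below are purely assumed for illustration; actual prices vary widely by analyte and provider.

```python
# Hypothetical cost comparison: sparse all-fixed-lab design vs. a
# collaborative design (dense field analyses + a few confirmatory
# fixed-lab samples). All numbers are illustrative assumptions.
fixed_lab_cost = 250.0  # assumed cost per fixed-lab analysis (USD)
field_cost = 25.0       # assumed cost per field-method analysis (USD)

traditional = 40 * fixed_lab_cost                        # 40 lab samples only
collaborative = 200 * field_cost + 15 * fixed_lab_cost   # 200 field + 15 lab

print(f"traditional  : ${traditional:,.0f} for  40 data points")
print(f"collaborative: ${collaborative:,.0f} for 215 data points")
```

Under these assumed numbers the collaborative design costs less while providing several times the sampling density, which is exactly the leverage the second-generation model seeks.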
28. What is the difference between screening analytical methods and more rigorous analytical methods?
The designation "screening method" has less to do with analytical chemistry and more to do with the marketplace for analytical services. One can only discuss "screening analytical methods" if there is more than one method to analyze for the same target compounds. Analytical methods are compared by assessing a number of factors including analyte list, precision, bias, susceptibility to interferences, detection and quantification capability, quantification range, and costs (a function of the required equipment and reagents, labor requirements, throughput, etc.). When more than one method exists to analyze for a particular target compound or element, one method will tend to excel in one aspect of method performance, but may be worse in other aspects. Naturally, a very important factor is cost. If all method options cost the same, there would be no incentive to use methods that tend to perform more poorly, and they would disappear from the marketplace. However, the cost advantages of some methods counter-balance their poorer analytical performance, especially for applications where better performance is not necessary, or where the interferences that cause poorer performance are not expected to be present. The marketplace retains these methods because they are often faster, "cheaper," and/or easier to use, but will often label them as "screening methods" to distinguish them from higher-cost but better-performing methods.
To an analytical chemist, all methods (including regulator-approved methods) have limitations in precision, bias, detection limits, specificity, and interferences. There is no sharp line between screening methods and more rigorous methods, only a continuum of individual performance characteristics and practicality. If allowed to use professional judgment, an analytical chemist would choose the best method for a particular application after balancing the individual performance characteristics of the method (especially the potential for interferences in environmental applications) and costs against the need for information reliable enough to support intended decision-making. Whether or not the marketplace considered that method to be "screening" would not enter into the chemist’s decision to use that method. The chemist would simply decide whether the performance and efficiency of the method was most appropriate for the intended data use, factoring in the budgetary, time, and other logistical constraints of the project.
29. Why does the Triad distinguish between "analytical quality" and "data quality"?
Under the Triad approach, the data’s ability to support the intended data use determines whether data quality is acceptable or unacceptable. This definition for "data quality" is consistent with recent EPA quality guidance. Since the data used to make project decisions are produced from environmental samples coming from heterogeneous matrices, the concept of "data quality" must include not just an assessment of analytical performance against data use, but also an expression of the sampling "performance" or sample representativeness. In other words, the Triad views data quality as an integration of both analytical quality and "sampling quality" (i.e., sample representativeness). If only the performance of the analytical method is under discussion, then the term "analytical quality" should be used to avoid confusion. In a Triad context, discussion of data quality must also include explicit discussion of sample representativeness in the context of the conceptual site model as it relates to the intended decision. It is not appropriate to use the term "data quality" if analytical performance is the only aspect of data quality being considered.
30. How does Triad deal with sample representativeness?
The Triad handles sample representativeness by building a conceptual site model (CSM), and then using the CSM to guide additional data collection to fill remaining data gaps and further evolve the CSM. The completeness and accuracy required in the final CSM is governed by what is required to make project decisions. Building and refining the CSM and demonstrating sample representativeness is an iterative process. Dynamic work strategies combined with real-time measurement systems allow much of this iteration to be performed cost-effectively and efficiently during the course of field activities.
Sample representativeness begins in systematic planning. The history of the site, any information concerning contaminant release and distribution mechanisms, along with data from historical or current environmental sampling are used to build and refine the conceptual model. Once the CSM is understood well enough to understand contaminant populations (especially their spatial or temporal boundaries), additional sample collection can be designed so that its results are representative of the property important for decision-making. For example, the property of interest for a risk assessment might be the average concentration of cadmium in soils that a worker might be exposed to during excavation.
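The cadmium example above turns on estimating an average concentration from a set of representative samples. A minimal sketch of that calculation follows; the sample values are hypothetical, and the one-sided 95% upper confidence limit (UCL) of the mean is shown only because exposure-point concentrations are conventionally reported conservatively (EPA's ProUCL guidance governs the choice of UCL method on real projects).

```python
# Hedged sketch: estimating an exposure-point concentration for a
# hypothetical worker scenario from cadmium soil results (mg/kg).
# Data values and the simple t-based UCL are illustrative assumptions.
import math
import statistics

cadmium_mg_kg = [1.2, 0.8, 3.5, 2.1, 0.6, 5.4, 1.9, 2.8]  # hypothetical data

n = len(cadmium_mg_kg)
mean = statistics.mean(cadmium_mg_kg)
sd = statistics.stdev(cadmium_mg_kg)
t_95 = 1.895  # one-sided Student's t at 95%, df = 7 (from a t-table)

ucl_95 = mean + t_95 * sd / math.sqrt(n)
print(f"mean = {mean:.2f} mg/kg, 95% UCL = {ucl_95:.2f} mg/kg")
```

Note that this arithmetic is only meaningful if the samples are representative of the exposure unit in the first place, which is the CSM's job, not the statistics'.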
The second-generation data quality model used by the Triad approach expects that a number of sampling-related variables will need to be considered and addressed based on the heterogeneity and spatial/temporal distribution of contamination predicted by the CSM. These variables include sample support, sampling design, sample preservation, sample homogenization, and subsample support. These variables are important because the way samples are collected from heterogeneous matrices affects analytical results, irrespective of the analytical method. Controlling these variables is vital for solid samples, such as soils, wastes, sediments, etc. It is also critically important for groundwater collection from wells, and can be important for surface waters and ambient air monitoring as well. It is impossible to control for these variables unless one first has a clear understanding of what decisions will be made with the data, mandating that systematic planning be done. It is also impossible to predict and control these variables without a CSM for how the contamination may be distributed throughout the environmental media in question.
31. What are "decision quality data" and "effective data" as defined under the Triad approach?
These two terms refer to data of known quality that are effective for making the intended decisions because sampling, analytical, and relational uncertainties have been managed to the degree necessary to meet specified goals for decision confidence. Decision quality data can be cost-effectively provided in a number of ways, depending on the decision.
Early in a project, decisions will revolve around developing the conceptual site model (CSM) and testing alternative hypotheses about what contaminants are present, how they got there, and how they are distributed. Broad spectrum fixed laboratory methods may be useful screening tools to evaluate the list of potential contaminants of concern, but data users should realize that the realities of full-suite analysis mean that analytical quality will not be the same for each analyte on the list. Further workup is generally required to resolve any analytical inconsistencies and to understand sample representativeness. For example, are the PCBs detected by full-suite SVOC analysis in a couple of samples really a contaminant of concern for the site? Is TCE in the groundwater caused by on-site sources or by migration under the site from off-site sources? The data needed to confidently resolve these types of questions seldom require the lowest possible detection limits or the best analytical precision. Dense sampling (or sampling in the most informative locations) using relatively imprecise field techniques can provide definitive evidence of widespread vs. inconsequential contamination or on-site vs. off-site sources. Even if not of "gold-plated" analytical quality, data quality is "acceptable" (i.e., of decision quality) if both sample representativeness and analytical quality are "good enough" to support the decision being made. The field method alone may be able to provide sufficient analytical quality along with sufficient sampling density to produce data effective for making the decision.
On the other hand, most projects hope to eventually achieve clean closure. Regulatory confidence in clean closure decisions and a "no further action" letter requires 1) good delineation of contaminant populations (i.e., a confident CSM) along with 2) rigorous analytical quality that provides the needed quantification limits for specific target analytes. Because of the cost, taking enough samples to get good delineation using fixed laboratory samples is usually cost-prohibitive, so delineation should be accomplished using a less expensive field method. Once the field method confirms a confident CSM, selected samples of known representativeness can be sent for rigorous analysis to demonstrate regulatory compliance. For this kind of decision, decision quality data are only achieved through collaboration between both types of analytical techniques: the field method develops the CSM and manages sampling uncertainties, while the lab method manages remaining analytical and relational uncertainty.
32. How is "screening quality data" defined under the Triad approach?
Since the Triad approach grounds data quality in decision-making, "screening quality data" are defined as data that provide some information useful to the project (perhaps by helping to refine the conceptual site model (CSM)), but not enough information to be used alone to support decision-making at the desired level of confidence. Thus fixed laboratory analyses can produce screening quality data if too few samples were collected to support a confident CSM so that the representativeness of isolated lab results is in doubt (i.e., there is excessive sampling uncertainty). Fixed laboratory results can also be of screening quality if the detection limit is elevated above the action level as a result of interferences (i.e., excessive analytical uncertainty). Results from screening analytical methods may be dense enough to support a confident CSM, but may be only screening quality if detection limits (or some other aspect of analytical performance) are inadequate to support stringent data uses (i.e., excessive analytical uncertainty).
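One of the cases above, a non-detect whose sample-specific detection limit is elevated above the action level, lends itself to a simple usability check. The function and values below are illustrative assumptions, not regulatory rules.

```python
# Hedged sketch of the data-usability logic described above: a non-detect
# whose sample-specific detection limit sits above the action level is
# only "screening quality," because it cannot demonstrate compliance.

def usability(result, detection_limit, action_level):
    """Classify a single result's usability against an action level."""
    if result is None:  # reported as a non-detect
        if detection_limit > action_level:
            return "screening quality: detection limit above action level"
        return "usable non-detect: detection limit below action level"
    return "detected: compare result directly to action level"

# Hypothetical soil results against an assumed 400 mg/kg action level:
print(usability(None, 1200.0, 400.0))  # interference-elevated DL
print(usability(None, 40.0, 400.0))    # adequate DL
print(usability(650.0, 40.0, 400.0))   # detected result
```

A real usability assessment would, of course, also weigh sampling representativeness, as the preceding questions emphasize.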
33. What is "data of unknown quality"?
Data are of unknown quality when the data user lacks critical information needed to guide interpretation of the data with respect to the intended decision. These data do not even rise to the level of "screening quality data" (which requires that the quality of data be known, i.e., individual performance parameters such as reporting limits, precision, bias, representativeness, etc.). This unfortunate situation sometimes occurs when data (field-generated or fixed-laboratory) are reported with no supporting QC and procedural information. There is no way for the data user to know whether proper sampling and analytical procedures were followed or not, or whether site-specific interferences caused bias or not. For example, data may be reported to the data user as simply a table of numbers or non-detects with no documentation about sample collection and preparation procedures, analytical method procedures, sample-specific detection limits, analytical precision, analytical bias, evaluation of interferences, locational information, etc. This is unacceptable and the data are unusable. The data user should never be asked to accept data results on "faith" and is fully justified in rejecting such data.
Legacy data sets are often common examples of data of unknown quality when pertinent QA/QC information either was never reported, or is no longer available. While likely unusable for decision-making purposes without further evaluation, these data can potentially be used in a qualitative fashion to support CSM development. For example, such data sets may identify contaminants of concern or give an indication of the magnitude of concentrations present. In some cases the value of legacy data sets can be verified by re-analyzing a subset of archived samples (if available) to provide points of comparison with data sets of known quality.
34. Why is the term "field screening" discouraged under Triad?
Although Triad practitioners may use the term "screening analytical method" to refer to methods with certain characteristics (such as lower precision, lack of analyte specificity, higher detection limits, or greater bias than available fixed-laboratory methods), they avoid the term "field screening" because this term fosters a number of misconceptions in the environmental community. The term "field screening" is rooted in the first-generation data quality model where the rigor of the method was considered the only determinant of data quality. Issues associated with using this term include the following:
35. How are real-time measurement systems related to the EPA Contract Laboratory Program and to the Superfund analytical method classification system?
The EPA Contract Laboratory Program (CLP) refers to a program developed by EPA to support analytical contracting for the Superfund program. The types of real-time measurement systems employed by the Triad can include methods identical to those standard techniques commonly accessed through the CLP, particularly if appropriately equipped field laboratories are employed.
As part of the Superfund program, an analytical method classification system has been used in the past that categorizes data sources by their "quality". Real-time measurement systems can include analytical techniques that span the range of this classification system. Under the Triad, "quality" has a different meaning than used by this classification system. For the Triad, data quality refers primarily to its ability to support decisions that need to be made, rather than simply the level of analytical uncertainty associated with a particular measurement technique. The distinction is important, because the Superfund classification system can be misleading when evaluating the potential value of a particular analytical or measurement technique as an input to a particular decision.
36. What is the relationship between real-time measurements systems used by the Triad and SW-846 protocols?
EPA’s SW-846 methods manual contains methods and protocols for solid and hazardous waste analyses recognized by the EPA RCRA program. For methods to be included in SW-846, they must be supported by a defined validation program, typically involving precision, accuracy, ruggedness and other defined method parameter studies. Inclusion of methods in SW-846 implies that there has been sufficient experience with the method and that its results will have a relatively predictable level of analytical quality. While most of the SW-846 methods describe fixed-laboratory techniques, some of the real-time measurement systems commonly used by the Triad (e.g., XRF and immunoassay kits for a variety of analytes) have been included in SW-846 for years.
Two important points must be kept in mind. First, just because a particular method is contained in SW-846 does not mean that it is applicable without modification, or even most appropriate, for the analytical needs of a particular site. Second, just because a method is not contained in SW-846 does not necessarily mean that it is inferior to SW-846 methods in the context of the needs of a particular site. SW-846 methods, particularly those with real-time capabilities, are a useful point of departure when initially reviewing analytical methods for a site since they have already attained a certain level of acceptability and recognition within the regulatory community, and are likely to be readily available from analytical service providers.
37. What real-time measurement and/or field measurement techniques are currently available?
There truly is a wide range of options for providing real-time measurement support. The list is continuously growing. There are several resources that present useful information for project managers and technical staff as they search for appropriate techniques. These include a listing of technologies maintained at http://fate.clu-in.org and http://www.frtr.gov/site/.
38. Can I still use a Triad approach if an appropriate real-time technique is not available for my site (i.e., contaminants are not detectable at action levels by real-time techniques)?
Even if a site cannot make full use of the Triad (e.g., an appropriate real-time technique is not available or cost-effective), components of the Triad can still provide significant benefit. For example, the use of a systematic planning process will have value even if a dynamic work strategy and real-time measurement techniques are not employed. Similarly, focusing on collaborative data sets (e.g., data sets that contain some higher quality standard laboratory analyses supplemented by a larger number of lower-cost but less definitive data) can greatly improve overall decision quality while at the same time reducing characterization costs, whether adaptive sampling programs are used or not. Even if available field methods cannot reach regulatory action levels, they may still be useful in developing the conceptual site model (understanding source areas and migration pathways and the degree of heterogeneity). An important point to remember is that for some contaminants of concern, even commonly accepted fixed-laboratory methods have difficulty attaining desired detection capabilities.
39. What QA/QC requirements apply when using a Triad approach? How are these different from what I would do otherwise?
While Triad QA/QC details are distinctly different from those associated with traditional CERCLA or RCRA activities, the role of QA/QC is the same. In both cases, appropriate QA/QC assures that data are of sufficient quality to support decisions that need to be made at the level of confidence required.
Triad QA/QC programs differ from traditional CERCLA and RCRA programs in three key ways. First, some of the real-time analytical techniques employed by the Triad are typically non-standard, and as such may require additional attention to make sure appropriate QA/QC has been defined. Second, real-time analytical techniques employed by the Triad may not be immediately accepted by regulatory agencies, or may raise questions about site-specific performance capabilities. In these settings, there may be the need to conduct demonstration of methods applicability before the real-time techniques are routinely deployed. Third, the availability of real-time data provides the ability to adjust QA/QC requirements for Triad programs while work is underway. For example, at the initiation of a project there may be a relatively higher level and frequency of QA/QC checks on real-time results until sufficient confidence has been gained in system performance, with QA/QC then adjusted to reflect that higher level of confidence. Conversely, real-time QA/QC data may flag potential data quality concerns that require an increase in QA/QC review and/or modifications to the real-time measurement systems used. These types of QA/QC programs are termed "focused QA/QC."
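The "focused QA/QC" idea, adjusting QC effort up or down as confidence in system performance changes, can be sketched as a simple rule. The recovery limits and check frequencies below are illustrative assumptions; real limits come from the method's QA/QC protocol and the project plan.

```python
# Minimal sketch of "focused QA/QC": the fraction of field samples paired
# with a QC check is adjusted based on recent spike-recovery performance.
# Limits and frequencies are illustrative assumptions only.

def qc_frequency(recent_recoveries, low=0.75, high=1.25):
    """Return the fraction of field samples to pair with a QC check."""
    out_of_control = [r for r in recent_recoveries if not (low <= r <= high)]
    if out_of_control:
        return 0.20  # tighten QC after any out-of-limit recovery
    if len(recent_recoveries) >= 20:
        return 0.05  # relax QC once sufficient confidence is established
    return 0.10      # startup frequency while confidence builds

print(qc_frequency([0.95, 1.02]))        # early in the project
print(qc_frequency([1.0] * 25))          # established confidence
print(qc_frequency([0.95, 1.40, 1.02]))  # flagged concern
```

The real-time availability of these QC results is what makes the adjustment possible while work is underway, rather than during after-the-fact data validation.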
40. Should field-based or fixed-laboratory methods be used?
Under the Triad, collaborative data sets usually provide the greatest value. Collaborative data sets come from a combination of field-based and fixed-laboratory methods. The optimal mix depends on the needs of a specific program. Factors that feed into the decision include the relative per unit costs of analytical options, the total level of analytical work expected, the turn-around times required for analyses, the analytical quality of various options, and the level of regulatory acceptability for various options. Fixed-laboratory methods can often provide "real-time" results (i.e., quick turn-around), albeit at higher costs. Conversely, some field-based methods, with appropriate QA/QC in place, can provide analytical data quality equal to or better than standard fixed-laboratory analyses.
41. Is there a typical approach to establishing the reliability and usability of field-based measurements?
It is vital to demonstrate the usability and reliability of field-based measurement system results in a Triad-based project. Establishing usability/reliability often takes place in two distinctly different phases.
The first phase consists of a demonstration of methods applicability, if required. A demonstration of methods applicability typically involves a pilot study to evaluate the performance of a proposed set of methods under site-specific operational conditions. A demonstration of methods applicability is warranted if there is insufficient certainty about site-specific method performance, or if method protocols need to be "tweaked" to make sure they are optimized for site-specific needs. A demonstration of methods applicability can be a stand-alone study, or it can be woven into pre-planned characterization activities. As its name implies, the primary purpose of the first phase is to ensure that all involved are comfortable with the proposed set of methods and their associated operating procedures.
The second phase is during implementation of a field-based measurement system and is synonymous with implementing appropriate QA/QC protocols. It differs from a standard laboratory QA/QC program, however, in that there is typically a heavier emphasis on using information from sample splits to help guide the interpretation of data produced by less traditional methods, with the split being analyzed by both the field-based method and a standard laboratory method. However, the splits are not used as the sole mechanism to establish the performance of the field method. Sufficient in-field method QC is expected to independently establish that the method and the operator are performing within expected limits during field implementation. Splits are generally performed (after sufficient attention to homogenization and sub-sample support) for the purposes of demonstrating the comparability between data sets produced by the field method and data produced by more traditional methods. Demonstrating method comparability provides confidence that decisions made on field-generated data are correct. Split samples may be selected randomly in certain situations. However, splits are usually selected for the purpose of managing some aspect of decision uncertainty (e.g., concern about detection limits, interference effects, or changing relationships between indicator parameters being measured and true parameters of concern), so there will be a specific rationale for choosing samples for split analyses.
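One common way to examine split results is the relative percent difference (RPD) between the field and laboratory values for each pair. The sketch below is illustrative only; the 30% acceptance limit is a hypothetical project-specific planning value, not a regulatory standard.

```python
def relative_percent_difference(field_result, lab_result):
    """RPD between a field-based result and its fixed-laboratory split:
    |difference| divided by the mean of the two results, as a percentage."""
    mean = (field_result + lab_result) / 2.0
    if mean == 0:
        return 0.0
    return abs(field_result - lab_result) / mean * 100.0

def flag_splits(pairs, rpd_limit=30.0):
    """Return (field, lab, rpd) triples exceeding a project-specific RPD
    limit, warranting closer review of method comparability. The default
    limit is a hypothetical example."""
    flagged = []
    for field_result, lab_result in pairs:
        rpd = relative_percent_difference(field_result, lab_result)
        if rpd > rpd_limit:
            flagged.append((field_result, lab_result, rpd))
    return flagged
```

A pair exceeding the limit would not automatically invalidate the field data; consistent with the discussion above, the question is whether the decision supported by the field result would have been different.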
When reviewing the results from split analyses, it is important to remember that the primary purpose is not to establish a quantitative correlation between a field-based technique’s result and a standard analysis result (although that may be an outcome). It is to establish that the field-based method is providing information useful to the decision that needs to be made.
42. How much more will a Triad approach cost on the front end of a typical project, and how much savings can be expected over the life of a project?
In general, a Triad approach can be expected to front-load project costs to a greater degree than a traditional approach would, but also to produce significant cost savings over the life cycle of a project. The additional front-end costs are attributable to larger investments in systematic project planning activities and the potential need for demonstrations of method applicability and/or customization of analytical techniques and associated QA/QC. Cost savings come from a number of sources, including reduced per-analysis costs, reductions in the overall number of samples needed to achieve project goals through dynamic adaptation of data collection work, compressed schedules and fewer field mobilizations, and improved remedial action performance (e.g., waste stream minimization). The expected cost savings are highly dependent on site-specific characteristics. In general, the potential for cost savings is greater as one moves from characterization to remediation phases. Case studies have reported characterization savings on the order of 50% or greater as compared to data collection programs that develop the same level of confidence in the conceptual site model, but that are based solely on standard analytical techniques.
43. Is there guidance about how project budgeting should change to effectively implement a Triad approach?
The largest impact the Triad potentially has on project financial planning is associated with cost estimation and project budgeting. The dynamic nature of Triad work strategies means that the ultimate extent of field activities may be less defined at the outset of project work than would be the case otherwise. This is because a Triad project expects field activities to develop according to the actual conditions found in the field, whether or not project planners had accurate expectations during project planning. This provides some unique benefits from a technical perspective, including allowing activities to change in response to unexpected field conditions, and permitting work to continue until the objectives of field activities are met. However, this unique benefit also poses additional cost estimation and budgeting challenges.
The reality for all hazardous waste characterization and remediation projects is that their ultimate life-cycle cost is highly uncertain at the project outset. For traditional programs, this uncertainty is addressed by dividing activities into discrete sequential pieces, each of which is planned with relatively fixed budgets and scope. The end result is that life-cycle cost uncertainties are reflected in the number of required activities along with associated budget, schedule, and scope creep, but not in the cost of each activity as it comes on-line. In contrast, with a Triad approach there will likely be relatively less uncertainty about the number of field activities required, but relatively more uncertainty about the costs and scope associated with individual activities when they begin. Cost estimation under the Triad requires contingency analysis to determine the most likely cost scenarios, the deviations that are plausible in response to unexpected field conditions, and the cost implications of those deviations.
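The contingency analysis described above reduces to simple scenario arithmetic: enumerate plausible outcomes, attach rough probabilities and costs, and compute a weighted estimate alongside the plausible range. The scenarios, probabilities, and dollar figures below are invented for illustration.

```python
def expected_cost(scenarios):
    """Probability-weighted cost estimate across contingency scenarios.
    'scenarios' maps a description to a (probability, cost) pair; the
    probabilities and costs here are hypothetical planning assumptions."""
    total_p = sum(p for p, _ in scenarios.values())
    if abs(total_p - 1.0) > 1e-9:
        raise ValueError("scenario probabilities must sum to 1")
    return sum(p * cost for p, cost in scenarios.values())

# Hypothetical contingency table for a dynamic field program.
plan = {
    "base case: field work proceeds as planned": (0.60, 400_000),
    "additional delineation step-outs required": (0.30, 520_000),
    "unexpected source area discovered":         (0.10, 700_000),
}
```

A budget built this way carries both a most-likely figure and an explicit worst-case bound, which is the information a project sponsor needs to reserve contingency funds for a dynamic work strategy.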
44. What contracting mechanisms will allow me to implement the Triad? How do I procure subcontractor support for dynamic work activities when the final scope of work is unknown? What if I am stuck with a fixed price contracting mechanism?
For sites where the problem is well defined, a fixed-price vehicle with incentives may be preferred. When a project is less well defined, a time and materials contract may be more appropriate. Regardless of the contract vehicle, the key when using the Triad approach is to group logically related activities into unitized rates that can be used to manage costs in response to what is discovered, and to allow expansion or reduction in sampling/data analysis/reporting efforts.
45. What impact will the Triad have on my staffing requirements and associated training needs?
While staffing requirements are mostly the same for the Triad as for more traditional approaches, successful implementation of the Triad approach does require additional or different support in some areas than a traditional approach would. This includes technical support in the development of a sufficiently accurate conceptual site model (CSM), analytical chemistry support, data management support, field decision support (including stakeholder communication), and contracting support. Since the CSM is at the core of Triad systematic project planning, having appropriate technical expertise available is critical to project success. The Triad will also likely produce data collection programs that involve non-standard and/or innovative real-time measurement technologies, requiring a level of field-savvy analytical chemistry expertise that might not otherwise be utilized. Because Triad dynamic work strategies base field activities on real-time data, data management support must be available to handle real-time data in a manner that does not compromise field activities. This can potentially include Geographic Information System (GIS), data visualization, database design and management, and laboratory data management skills. A Triad-based set of field activities will also likely require a higher level of involvement by senior technical staff during implementation than a traditional program would, to assist with decision-making and determine what should be done in response to unexpected results. Finally, a Triad approach may require flexible contracting mechanisms that differ from a traditional or fixed-price approach, so there may be a need for greater procurement and contracting involvement with project planning.
46. How do I effectively communicate dynamic changes in data collection to my field staff? How do I organize the work effort to keep this from becoming chaotic?
Triad-based data collection programs can generate literally thousands of data points a day. A well-planned Triad field activity will leverage these data to produce a final characterization or remediation result that is more efficient and effective than the traditional alternative. A poorly planned Triad field activity can become chaotic in response to information overload, opening the door to incorrect decisions or work slowdowns and losing much of the benefit of the Triad approach. Effective communication and organization are critical elements of Triad work activities. The details will vary from site to site and activity to activity, but in general a well-planned Triad work effort will include at minimum a description of the data to be generated, the formats expected, how the data will flow, what QA/QC must be applied, how data will be organized and depicted, which decisions they will feed, and who has responsibility for the data at each step of the way. Critical path analyses can be useful for debugging decision-making strategies and work-flow elements that depend on real-time data. Readiness reviews, including "dry runs" with dummy data sets, are useful for demonstrating work plan adequacy. The ability to quickly generate daily field maps of results is important. Secure project web sites can be a very effective means for disseminating project-critical information in "real-time" to project technical staff, project management, and regulators.
47. How long are field measurement locations and results valid since it may not be possible to archive samples and locations may not be protected?
The issue of sample archiving and/or preservation of the original support for measurement results can be important in the context of data defensibility. This question captures two distinctly different issues: (1) How does one preserve the original sampled media to allow replicate analyses in the future to validate the original result? (2) How does one ensure the sample remains representative of the decision unit (e.g., barrel, area of contaminated sediment, groundwater point of compliance, etc.) over time?
For ex situ sample analyses (real-time or traditional), preserving sampled media is straightforward. If sample preservation is a requirement for a program for whatever reason, arrangements can be made for sample archiving that ensure the long-term integrity of the sample and maintain chain of custody. For measurements that are in situ, or in other words that measure some key parameter directly in environmental media, the issue becomes one of being able to document the exact measurement location and preserve that location’s state. One must recognize that even if one were able to both exactly identify the location of an original in situ measurement and protect that location, natural variability in environmental conditions and processes would likely lead to different measurement results over time. The real issues in this case, however, are not whether a particular measurement is exactly replicable, but first, whether one would expect the decision drawn from that measurement to have been different, given the potential range in repeated measurement values over time, and second, the length of time over which the field measurement can be expected to be valid. The latter is very site-specific, depending on the nature of the contaminants of concern, the influences of the environmental matrix, and contaminant mobility and fate over time. A related issue is the question of chain of custody for sampled media. For field-based methods, the chain of custody can be significantly shorter, or even potentially eliminated completely, as compared to traditional fixed-laboratory programs.
How to maintain sample representativeness is a different question and a problem common to both in situ measurement systems and ex situ sample analyses. Particularly for sediments and groundwater, active environmental fate and transport processes virtually guarantee that sampling results will not be representative of the state of the system for any extended period of time. The issue of sample representativeness over time plays to the Triad’s strengths: the ability to quickly and relatively cheaply replicate measurements where there are questions about the current representativeness of historical data sets.
48. What contingencies are necessary to assure that project progress is not hampered?
Proper contingency planning is important for Triad-based field activities to be successful. Once field crews are mobilized (whether for characterization or remediation purposes), downtime due to equipment failures or delays in decision-making translates directly into increased project costs. For field-deployed analytical techniques, per-sample costs are primarily a function of sample throughput. Lower throughput because of downtime or delays means higher analytical costs.
There are three primary technical causes for delays in Triad activities that should be addressed by contingency planning. The first is equipment malfunction or problems with analytical method performance. The second is an unexpected project outcome (i.e., an unexpected sampling result or remediation finding) that requires deliberation before work proceeds. The third is delays in information analysis and sharing that prevent timely decision-making. All three possibilities should be addressed through contingency and logistical planning so that their impacts, if they occur, can be minimized. Readiness reviews combined with "dry runs" provide an effective means for testing and debugging proposed field activities and associated contingency plans.
There is a fourth programmatic cause for progress delays that is related to timely decision-making, and that occurs when decision-makers (e.g., a Triad core team) are not available as needed when decisions must be made. At the very least, project planning should include mechanisms for reaching time-critical decisions even if all pertinent decision-makers are unavailable for whatever reason at any point in time.
49. What is the most logical sequence of activities to reach project goals?
Systematic project planning forms the basis for Triad field activities. In general, systematic planning entails three primary elements that are basically sequential in nature. The first forms a project core team, identifies objectives, constraints, and the regulatory framework in which work will take place, and specifies the primary and secondary decisions that will be made. The second constructs and maintains a conceptual site model (CSM) that captures information pertinent to the primary and secondary decisions. The third evaluates and manages the uncertainty associated with decision-making in the context of the CSM. In practice there are a number of systematic planning frameworks that can be used to work through these elements, including EPA’s Data Quality Objective Process and the USACE’s Technical Project Planning process.
50. At what frequency should senior staff review field and laboratory activities?
The level and frequency of senior staff review of field and laboratory activities will be determined by site-specific needs, but in general the level of involvement of senior staff in reviews during work activities will be greater than what would have occurred in a more traditional approach. This additional level of involvement, however, will be balanced on the back-end by potentially less intensive review requirements for final products by senior staff. Planning documents based on dynamic work strategies that result from systematic project planning should incorporate "if-then" logic to support in-field decision-making. The extent to which such logic captures the "surprises" encountered during field activities will determine the level of involvement of senior oversight staff during field activities. Obvious key points of review are when decisions need to be made about the performance and acceptability of real-time measurement data, when QA/QC flags potential problems with work underway, or when an unexpected situation arises that was not fully covered by contingency plans contained in work plans.
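The "if-then" logic referred to above can be as concrete as a thresholded decision rule written into the dynamic work plan. The sketch below is purely illustrative; the analyte, action level, and confirmation trigger are hypothetical examples, not values drawn from Triad guidance.

```python
def next_action(xrf_lead_ppm, action_level=400.0, confirm_fraction=0.5):
    """Sketch of an in-field 'if-then' rule driven by a real-time
    measurement (here, a hypothetical XRF lead result in ppm).
    Thresholds are illustrative planning values."""
    if xrf_lead_ppm >= action_level:
        # Above the action level: adapt the program and delineate further.
        return "step out: sample adjacent grid nodes"
    if xrf_lead_ppm >= confirm_fraction * action_level:
        # Gray zone: collect a split for fixed-laboratory confirmation.
        return "confirm: send split to fixed laboratory"
    # Well below the action level: boundary delineated in this direction.
    return "stop: no further sampling needed here"
```

Results falling outside such pre-approved rules (for example, an instrument interference flag) are exactly the "surprises" that escalate to senior technical staff for review.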
51. Are health and safety requirements different for a Triad approach?
Many health and safety requirements are not different for a Triad approach. However, if field methods are used, this should be addressed in the health and safety plan. Some methods may use solvents or standards that are potentially hazardous. Whether or not personal protective equipment (PPE) is needed for operating the method should be assessed. Handling and disposal of analytical wastes and samples need to be planned out. A disturbing trend is for field analysis to be conducted in hotel rooms. Contaminated samples should not be brought into hotel rooms, and potential spillage of solvent and standards in public places must be avoided. The health and safety plan for a Triad project should guide the field crew on what analytical-related activities may be permissible in hotel rooms or other public places, and which are not.
52. Does the Triad change project documentation requirements?
The Triad does not, in general, change project documentation requirements from a planning perspective. The Triad may introduce additional project documentation requirements during implementation. For example, one may need to document the results from the demonstration of methods applicability. There also will be a need to provide additional documentation to support decisions that have been made in response to real-time data, in particular when these decisions were not covered by contingency plans contained in original work plans.
53. How is the Triad related to the EPA DQO process?
The Triad emphasizes the need for systematic planning encompassing all project decisions and activities, including data collection. EPA’s Data Quality Objective process is one example of a systematic planning process designed to support data collection that can be used in the context of a Triad approach. Triad systematic planning is broader than the DQO process in that it considers project activities that extend beyond just data collection. Historically the DQO process has not been used for developing the types of dynamic work strategies envisioned by the Triad, but there is nothing inherent to the structure of systematic planning using DQOs that prevents this. In addition, some practitioners view the use of classical statistical hypothesis testing to be inseparable from the DQO process. The Triad approach is open to all tools, such as statistics, that support management of decision uncertainty. However, due to the heterogeneous nature of many contaminated sites, care must be exercised when selecting the appropriate statistical tools. Geostatistics (which is designed to cope with spatial patterning of contamination) may be more appropriate than classical statistics (which assume that no spatial patterning exists). Statistical tools selection must be considered during systematic planning in conjunction with the conceptual site model.
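The distinction between classical and spatial statistics can be made concrete with the empirical semivariogram, the basic geostatistical tool: it measures how dissimilarity between measurements grows with the distance separating them, information a classical mean-and-variance summary discards. The sketch below is a minimal illustration with invented coordinates and concentrations.

```python
import math

def empirical_semivariogram(points, lag, tol):
    """Average of 0.5 * (z_i - z_j)**2 over all pairs of points whose
    separation distance falls within [lag - tol, lag + tol].
    'points' is a list of (x, y, concentration) tuples; returns None
    when no pairs fall in the lag bin."""
    half_sq_diffs = []
    for i, (x1, y1, z1) in enumerate(points):
        for x2, y2, z2 in points[i + 1:]:
            h = math.hypot(x2 - x1, y2 - y1)
            if lag - tol <= h <= lag + tol:
                half_sq_diffs.append(0.5 * (z1 - z2) ** 2)
    return sum(half_sq_diffs) / len(half_sq_diffs) if half_sq_diffs else None
```

Semivariogram values that rise with lag distance indicate spatial patterning; in that case, statistical tools that assume independent, identically distributed samples can misstate decision uncertainty, which is why tool selection belongs in systematic planning alongside the conceptual site model.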
54. What is the relationship between the Triad and ASTM investigation/site assessment standards?
ASTM site investigation/assessment standards attempt to encapsulate best practices that have been demonstrated and have broad technical acceptance in a replicable set of methods and approaches. For example, there already is an ASTM standard for conducting Expedited Site Characterization work (D6235-98a), one of the historical improvements in the characterization process that the Triad has built on. There are also ASTM standards for accelerated site characterization of suspected petroleum releases (E1912-98) and for developing conceptual site models (E1689).
ASTM standards are a point of departure for systematic project planning. Specific environmental data collection technologies and analytical or measurement techniques are, however, under constant refinement and innovation. These technological advances can open the door to improvements in the characterization and remediation process. The Triad leverages these technological advances to improve decision-making in the characterization and remediation process.
55. What is the extent of regulatory acceptance of the Triad? What State regulatory agencies have bought into the Triad?
As the case studies contained in the Triad Resource Center demonstrate, the Triad approach has been used successfully at sites across the country that involve a wide range of contaminants of concern, and that represent a wide range of regulatory environments. The EPA is constantly working internally with EPA regional offices, as well as other state and federal agencies, to gain regulatory acceptance and understanding of the Triad approach. As an example, the State of New Jersey’s Department of Environmental Protection, with the assistance of EPA, has actively embraced the Triad and is working to integrate the Triad into its regulatory scheme. As with most advances in characterization and remediation technologies/approaches, however, the ultimate regulatory acceptance rests with the individual local, state, or federal agency staff member assigned to oversee work at a site.
As more and more projects are completed applying all (i.e., a Triad project) or some (i.e., a Triad-like project) of the Triad principles, the benefits of the overall approach are becoming more obvious to regulatory programs. It is too early in the transition process to provide detailed "how-to" instructions appropriate to different federal and state cleanup programs. However, regulatory programs are signaling their acceptance of Triad projects by documenting successful projects and providing overview documentation of general concepts.
For example, EPA’s Superfund program has developed a document entitled "Using Dynamic Field Activities for On-Site Decision Making: A Guide for Project Managers." This document discusses the use of dynamic work strategies and of field methods in a dynamic context and demonstrates EPA’s acceptance of dynamic approaches to Superfund project management. Case study examples of Superfund projects using dynamic work strategies in the 1990s are covered in Chapter V, with more detailed case study discussions available as separate documents. The guidance and case studies can be accessed through the Superfund Dynamic Field Activities homepage at http://www.epa.gov/superfund/programs/dfa/index.htm.
In addition, the Interstate Technology and Regulatory Council (ITRC) has prepared a document entitled, “Technical and Regulatory Guidance for the Triad Approach: A New Paradigm for Environmental Project Management,” which was released January 2004. This document is designed to acquaint state regulators with key Triad concepts and benefits, and address some expected State regulator concerns. It is available in paper and electronic forms through the ITRC website at http://www.itrcweb.org. Choose “Guidance Documents” from the main menu, then click the “Sampling, Characterization, and Monitoring” button.
56. How can I convince my State and Regional regulators of the value of a Triad approach?
The existing documentation mentioned in Questions 54 and 55 can be shown to regulators to demonstrate that the ideas and strategies used by the Triad approach are not brand new or untried at the level of federal or state waste programs. However, from a regulator’s perspective, the value of the Triad for any particular project will rest in its ability to improve decision-making quality with the same level of resource investment. This means simply that it is less likely that residual contamination that would pose a human or ecological concern will remain at a site once characterization and remediation is complete. Among the primary stumbling blocks preventing regulatory acceptance are questions about the performance and utility of proposed real-time analytical techniques, and the quality of decisions that will be made within a dynamic work strategy. The Triad Resource Center contains case studies that demonstrate the benefit of these types of techniques within a Triad approach, and that set precedent for their use in a variety of settings. Lingering site-specific concerns can be addressed using demonstrations of method applicability, and through the design and implementation of proper QA/QC programs. Another primary stumbling block that is more difficult to address is the adversarial nature of cleanup activities at some sites, and the mistrust bred under such circumstances. In these cases static work plans developed under a more traditional approach provide some assurance to regulators about what exactly will be done. Dynamic work strategies can raise concerns about the responsible party making inappropriate decisions during the course of field activities. In this situation, a Triad core team comprised of regulators, technical support staff as appropriate, and project management that is fully engaged with planning and implementation activities is critical to ensuring a transparent work effort and consensual, technically-based, defensible decision-making.
57. How can I engage my regulators so that they can support near real-time data review and decision-making under a dynamic work strategy?
Regulator participation is important for the successful deployment of a Triad approach. There are three key areas where regulatory participation/involvement goes beyond what typically is required under a more traditional approach. The first is conceptual site model development and review. The second is selection and vetting of dynamic work strategies, along with supporting real-time measurement systems. The third is involvement with decisions during work activities on an as-needed basis. The use of a Triad core team to support decision-making can be one effective means for securing regulatory participation. The requirements for a successful team are that members are committed to working through issues in a non-adversarial manner, that they are available to make timely decisions, that they are in it "for the long haul," and that they can speak with confidence for their respective agencies. Dynamic work strategies and accompanying plans that are comprehensive in their contingency planning should minimize the need for unforeseen, on-the-fly decision-making. When decisions must be made during field activities, effective information organization, presentation, and dissemination capabilities are essential to timely decision-making. The use of secure project support Web sites is an effective means for presenting and disseminating real-time data to core team members who may be physically dispersed.
58. Is the Triad accepted by the military, the Department of Energy, and other federal agencies?
The Triad has been applied successfully at Department of Defense, Department of Energy, and Department of Agriculture sites. The primary value of the Triad for these agencies (and other responsible parties) is its ability to achieve project goals at reduced costs and potentially within accelerated schedules. As of Fall 2003, because of the success of early Triad pilot projects, the Air Force had begun actively exploring at its highest programmatic levels how to expand its technical capacity to support Triad projects at a national level.
59. How can primary decision-makers assure the quality of decisions made in the field and justify changes made in initial sampling plans?
Work plans that incorporate dynamic work strategies will include "if-then" logic and contingency plans to guide decision-making in the field. The primary issue this question raises pertains to decisions that are not anticipated by the work plan guiding field activities. In these cases, the work plan should specify the decision-making process and the level of technical and/or Triad core team review required to support the decision, so that the decision-making process is clear to field staff. Where the significance of a decision is not great (e.g., moving a sampling location a short distance because of access problems), the decision may be one that can be made and documented by field staff without additional review. Other decisions (such as changing the real-time measurement methodology used because of performance issues) will require a more formal review and approval process by senior technical staff and/or the Triad core team. Decision-makers can assure the quality of decisions made in the field by picking the right combination of experience and expertise for the makeup of the core team.
For more details, see QA/QC Dimensions and Dynamic Work Strategies.
60. How is actual work documented under Triad if it is not pre-specified in a work plan? How does one ensure that the results are legally and technically defensible?
As with any other project activity, one likely outcome of a Triad-based activity is a report documenting the work that was done; this report provides the venue for documenting the work that was undertaken. While a Triad-based dynamic work strategy does not necessarily provide a complete set of pre-specified activities, the work strategy should be explicit about the decision-making logic that will guide work as it proceeds. Activities required to ensure legally and technically defensible decisions should be specified in any work plans developed as part of the systematic planning process. Again, in this context the Triad does not impose any requirements that fall short of, or go beyond, what traditional work activities would impose.
61. How is an independent third party verification/validation performed under the Triad?
Independent third party verification/validation work is one way to ensure that decisions produced by project activities are legally and technically defensible. Independent third party verification can take several different forms. It can simply involve a third party review of data packages and associated QA/QC information to determine that data quality assurance goals have been achieved. It can include split analyses of samples taken during field activities to provide an independent check on analytical quality and analytical method performance. It can include independent data collection in addition to data collected as part of project activities to assess the overall quality of the sampling and analysis process. In all three cases, use of the Triad does not impose any peculiar requirements on third party verification/validation activities, other than the fact that there should be consistency and coordination in comparing data sets that may potentially be based on different analytical methods.
62. Can a site be released (No Further Action) based on field measurements alone?
In some cases, yes. There are field measurement systems that, with the proper in-field QA/QC protocols in place, can be expected to produce data that are of the same or better analytical quality than one would obtain from a fixed laboratory using standard techniques. However, close communication and negotiation with the regulator during systematic planning will generally be required to determine relevant regulatory requirements and what level of reassurance the regulator will wish to see in the site closure data set.