Key Triad Components
QA/QC within the Triad shares common goals and dimensions with traditional programs, but also has unique, Triad-specific components.
The goals of Triad QC components are no different from those of a more traditional program: to generate data of known quality whose quality characteristics are documented, verifiable, and technically defensible, and to identify in a timely manner issues or problems that would adversely affect performance and require attention. Along with these common goals, Triad-based programs share several QC components with more traditional efforts. These shared components are key to the Triad and include:
- The need for well-defined performance goals and associated metrics that provide the basis for effective QA/QC activities. For analytical methods these goals will likely be defined in terms of required precision, accuracy, representativeness, completeness and comparability.
- The need for a Quality Assurance Project Plan (QAPP) that documents information pertinent to the design and implementation of a project-specific QA/QC program. The role of the QAPP is the same for a Triad-based project as it is for a more traditional approach; however, the Triad QAPP will differ in one important respect. Flexibility will need to be built into the QAPP so that QC controls can be adjusted as data collection progresses to focus on those elements that directly relate to uncertainty, and ultimately, project success.
- The need for a technical team member with authority over and responsibility for the QA/QC associated with project activities. Sometimes termed a Quality Assurance Officer, this person is tasked with ensuring that QA/QC requirements are met, adjusting QA/QC as necessary to reflect changing site or program realities, and implementing corrective actions in response to QA/QC concerns. In the case of the Triad, this person needs to be aware of real-time measurement technology-specific QC requirements and concerns that may be peculiar to individual projects. This person should be part of the systematic planning team. In addition, this person (or a designee) should be assigned to the field team so that data quality can be assessed and tracked as data are generated.
- The need to establish a "culture of quality" among participating technical team members. This recognizes that the first line of defense against performance problems lies with every technical team member, whether they are specifically tasked with monitoring QC for a program or not. This requirement is heightened for Triad-based programs since data are available in "real-time" and need to be vetted quickly for quality concerns so that decision-making can proceed confidently. Field team members generating real-time data have the first opportunity to identify potential performance problems that may be emerging, along with their causal factors.
- The need for detailed Standard Operating Procedures (SOPs) for all quality-critical program activities. SOPs help ensure that a particular activity is implemented consistently regardless of the personnel involved. SOPs also provide a mechanism for documenting and evaluating the acceptability of generated data once work has been completed.
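The performance goals and metrics described above (precision, accuracy, etc.) are typically evaluated with standard QC calculations such as relative percent difference for duplicate pairs and percent recovery for matrix spikes. The sketch below illustrates how such checks might be scripted against project-specific goals; the threshold values shown are purely illustrative placeholders, not values from this guidance, and in practice they would come from the project QAPP.

```python
# Illustrative QC acceptance checks against project-specific performance
# goals. The limits below are hypothetical examples, not regulatory values.

def relative_percent_difference(primary: float, duplicate: float) -> float:
    """Precision metric for duplicate pairs: RPD (%)."""
    mean = (primary + duplicate) / 2.0
    return abs(primary - duplicate) / mean * 100.0 if mean else 0.0

def percent_recovery(spiked_result: float, unspiked_result: float,
                     spike_amount: float) -> float:
    """Accuracy metric for matrix spikes: % recovery of the added analyte."""
    return (spiked_result - unspiked_result) / spike_amount * 100.0

# Hypothetical project goals (in practice, defined in the QAPP).
MAX_RPD = 30.0                    # precision goal for duplicates, %
RECOVERY_LIMITS = (75.0, 125.0)   # accuracy goal for spikes, %

rpd = relative_percent_difference(12.0, 10.0)   # duplicate pair, mg/kg
rec = percent_recovery(48.0, 10.0, 40.0)        # 40 mg/kg spike

print(f"RPD = {rpd:.1f}% -> {'PASS' if rpd <= MAX_RPD else 'FAIL'}")
print(f"Recovery = {rec:.1f}% -> "
      f"{'PASS' if RECOVERY_LIMITS[0] <= rec <= RECOVERY_LIMITS[1] else 'FAIL'}")
```

Encoding these checks makes the performance goals operational: a field data manager can flag out-of-control results as soon as QC samples are analyzed rather than during post-project data review.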
Triad-based programs also include QA/QC components that are either unique or significantly different in scope and nature from what a traditional program would require. These include the following:
- Method Applicability Studies. Method applicability studies are also known as demonstrations of method applicability. Method applicability studies may be required if there is uncertainty about the potential performance of an analytical technique, or if additional site-specific information is needed to optimize the implementation of a particular technique. Method applicability studies are not necessarily unique to Triad-based programs, but the potential need is greater considering the emphasis on field-deployable real-time measurement systems. Traditional data collection programs that involve only standard fixed-laboratory techniques can also often benefit from a careful look at how site-specific matrix realities might affect expected generic analytical performance (e.g., the presence/absence of interferences, the range of contamination levels expected, etc.). The results from a method applicability study are closely tied to the design of an effective Triad QA/QC program. The results determine the analytical performance that should be expected, and so form the basis for establishing performance goals that ongoing QC will monitor. The results identify potential "weak links" in the data quality chain that should receive special attention from a QC perspective (e.g., the potential for sample preparation problems, particular matrix concerns that might be present, instrument calibration drift in response to changing environmental factors, etc.). The results of these studies also support the development of appropriate site-specific SOPs.
- Customized QC. For traditional data collection programs that rely on standardized, fixed laboratory analyses, QC is often pre-specified and standardized. Commercial analytical laboratories are often required by various certification and licensing organizations (e.g., the National Environmental Laboratory Accreditation Conference (NELAC), the U.S. Air Force Center for Engineering and the Environment (AFCEE), and various state laboratory licensing agencies) to produce data in a regimented, pre-defined manner specified by method and laboratory-specific SOPs. This includes calibration regimes, laboratory QC samples, internal standards, surrogate analyses, etc. In contrast, real-time measurement systems that are field-deployed typically have fewer pre-defined QC requirements. This provides the opportunity to develop QC protocols for these systems that are customized to the site-specific needs and performance goals of a project. The concept of customized QC requirements is consistent with EPA's Performance Based Measurement System (PBMS) initiative. PBMS provides the site investigator, regulators, and stakeholders the leeway to adjust method specifications (including QC) to address site-specific needs and issues.
- Focused QC. A QA/QC characteristic unique to the Triad is the ability to design focused QC protocols that include dynamic strategies for monitoring method performance. Focused QC refers to the ability to adapt or refocus QC efforts as a project proceeds in response to changing project needs or site conditions, leveraging the real-time data generation capabilities inherent in Triad data collection programs.
In focused QC programs, the intensity or frequency of QC activity can change over time. For example, as measurements become routine and the sources of analytical variability are understood, the frequency of some types of QC samples (e.g., matrix spikes and matrix spike duplicates) or protocols (source checks, calibration checks, etc.) may be reduced without affecting analytical data quality. In this setting, QC activity may be "front-end loaded" to flush out any performance problems early in the life cycle of field activities.
The intensity or frequency of QC activity can also change in response to the changing needs of characterization or remediation work. For example, as data collection objectives move from producing data suitable for risk assessment use to hot spot identification, QC protocols may be relaxed. Conversely, if data collection switches from supporting a remedial action (e.g., contaminated soil removal) to site closure documentation, QC protocols may become more stringent.
The intensity and frequency of QC activity can also change in response to specific site conditions or data results. For example, if the real-time technique is finding that nearly all samples are producing non-detect results, the QC program might be modified to increase the number of matrix spikes near the detection limits to verify that the field method is capable of detecting the contaminant if it were present at or near the specified detection limit. As another example, sample matrix characteristics might change unexpectedly (e.g., higher moisture content, increased organic carbon content, evidence of other potential interfering factors) and warrant closer monitoring of method performance to ensure goals are consistently being met.
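The focused QC strategies described above can be thought of as simple decision rules that adjust QC frequency as a project proceeds. The sketch below is a minimal, hypothetical illustration of such a rule; the function name, thresholds, and intervals are assumptions for demonstration only, and a real project would define them during systematic planning.

```python
# Hypothetical "focused QC" rule: choose how often to insert QC samples
# based on project stage and recent results. All numbers are illustrative.

def qc_sample_interval(analyses_completed: int,
                       recent_nondetect_fraction: float,
                       baseline_interval: int = 10) -> int:
    """Return the number of field analyses to run between QC samples.

    - Front-end loading: run QC more often early in the field effort,
      while sources of analytical variability are still being learned.
    - If nearly all results are non-detects, tighten QC (e.g., more
      matrix spikes near the detection limit) to confirm the method
      could detect the contaminant if it were present.
    """
    if analyses_completed < 20:           # early in the field effort
        return max(1, baseline_interval // 2)
    if recent_nondetect_fraction > 0.9:   # nearly all non-detects
        return max(1, baseline_interval // 2)
    return baseline_interval              # routine operation

print(qc_sample_interval(5, 0.2))     # early stage: QC every 5 analyses
print(qc_sample_interval(100, 0.95))  # mostly non-detects: every 5
print(qc_sample_interval(100, 0.3))   # routine: every 10
```

The value of expressing the rule this explicitly is that the adjustment logic can be documented in the QAPP and applied consistently by field staff, rather than depending on ad hoc judgment calls during data collection.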
- Appropriate Data Review. Post-data collection data review (i.e., data verification and validation activities) is a critical component of any QA/QC program. For Triad-based activities, a key question is what level of data review is necessary to support confident decision-making, and what review steps are required for final documentation but can be done in a less time-critical fashion. The answer to this question depends on the level of data uncertainty present and the implications of making wrong decisions. For example, the intensity and completeness of data review would be significantly different if real-time data sets were being used to support clean closure decisions before backfilling took place than it would be if those same data were being used to guide contamination footprint delineation.
- Logistical Considerations. Appropriate levels of QA/QC are closely tied to logistical considerations for a Triad program. QA/QC evaluation takes time, and time is often of the essence when field activities are underway and decisions need to be made based on real-time results. Integrating decision-specific QA/QC needs with overall field activities is a requirement unique to the Triad. For those situations where QC requires the availability of off-site, fixed laboratory results as a point of comparison, one must recognize that there may be a significant time lag between the production of real-time data and when corresponding fixed laboratory results will be available for review. In the context of logistical concerns, readiness reviews and "dry runs" can be an important QA component of a Triad program to ensure that logistical considerations associated with QC protocols have been sufficiently addressed.
QA/QC activities are critical for demonstrating both data and decision quality. The demonstration of data and decision quality will be based on the weight of QA/QC evidence. For these reasons it is important that stakeholders, and particularly regulators, understand and accept the logic underlying proposed QA/QC activities.