Categorizing Analytical Technologies

Examples of categorization schemes for analytical and measurement technologies, and their relevance to the Triad.

There are many ways to categorize analytical and measurement technologies. These categorization schemes provide some insight into technology differences, the quality of data they produce, and their applicability to the Triad. In no case, however, is there a "bright line" separating one category of methods from another; each scheme divides a continuum of technologies (and/or the data they produce) into rather arbitrary groups. Examples of categorization schemes and their relevance to the Triad include:

  • Real-Time versus Not Real-Time Techniques

     As already discussed, "real-time" within the Triad context simply refers to the ability of a measurement system or analytical technique to produce data soon enough to affect decisions while work is underway. Real-time data can include data that are truly instantaneous (e.g., a gamma walk-over survey), available within a few minutes (e.g., an in situ XRF reading), produced within an hour or two (e.g., PCB test kit analyses), available within a day (e.g., a mobile on-site laboratory), or delivered within a couple of days (e.g., rapid turn-around from a fixed laboratory). Under this definition, a technique may be "real-time" for the purposes of one activity (e.g., supporting characterization work spread over several weeks) but not for another (e.g., supporting dig/no-dig decisions while remediation is underway). When reviewing the potential domain of candidate "real-time" technologies for a particular project, it is critical that the associated time frame for decision-making be clearly understood and articulated (see the sketch below). Viewed this way, almost every existing analytical technique or methodology is a candidate "real-time technique" from a Triad perspective.
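
     As a simple illustration, the sketch below (Python) screens candidate techniques against a project's decision window rather than against any absolute turnaround time; the technique names, turnaround values, and helper function are assumptions made only for this example.

         # Hypothetical turnaround times, in hours, for candidate measurement systems.
         TURNAROUND_HOURS = {
             "gamma walk-over survey": 0.01,     # effectively instantaneous
             "in situ XRF reading": 0.1,         # a few minutes
             "PCB test kit": 2,                  # an hour or two
             "mobile on-site laboratory": 24,    # within a day
             "fixed lab, rapid turnaround": 48,  # a couple of days
         }

         def real_time_candidates(decision_window_hours):
             """Return techniques whose results arrive soon enough to affect the decision."""
             return [name for name, hours in TURNAROUND_HOURS.items()
                     if hours <= decision_window_hours]

         # The same technique can be "real-time" for one activity but not another:
         print(real_time_candidates(72))  # characterization work spread over weeks
         print(real_time_candidates(4))   # dig/no-dig calls during active remediation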

  • Standard versus Non-Standard Methods

    The term "standard method" refers to a base measurement technology or analytical method as it is routinely implemented by fixed-laboratories and referenced by state or federal agencies in written documentation (such as EPA's SW-846 methods compendium). The term "non-standard method" includes both standard methods that have been modified to meet a particular site's needs, and techniques or methods that are truly different from standard methods in their base technology. Ironically, field-associated methods (such as XRF for metals and immunoassay test kits) that are often considered non-standard by regulators because they are not traditional fixed laboratory methods have been part of the SW-846 methods compendium for years. Their long-time inclusion in SW-846 should confer "standard method" status on these field methods for regulatory programs that consider SW-846 methods as "approved" methods by their waste program regulations.

     From a Triad perspective, whether a method is "standard" or "non-standard" is irrelevant as long as the data produced contribute in a cost-effective manner to the decisions that need to be made. The environmental community often assumes that the term "standard" implies a level of data quality higher than that obtained by "non-standard" methods. Although it may be true that standard techniques have more name recognition, wider acceptance, and well-documented analytical performance, it is not necessarily true that a standard technique will yield data of higher analytical quality than a non-standard technique for a specific application. In fact, the opposite is often the case, since non-standard techniques can include modifications that enhance their performance in the context of a specific site (e.g., modifications to sample preparation or cleanup procedures to address potential interferences). Because of the Triad's emphasis on real-time technologies and project-specific performance goals, at least some of the techniques implemented as part of a Triad approach will generally be non-standard in their application.

    Standard methods will also likely serve a role in a Triad program because they are often the comparison basis for real-time methods that are employed as part of a Triad approach.

  • Fixed-Laboratory versus Field-Deployable Technologies

    Fixed-laboratory techniques are those routinely deployed in a fixed-laboratory setting. Field-deployable technologies are those that can be brought to a site and deployed either directly in the field or as part of an on-site mobile laboratory. In many cases, the same base technology that is used in a fixed laboratory can be deployed in the field with appropriate modifications. Conversely, a technology that is inherently mobile in design can also be implemented in a fixed-laboratory setting. As with standard methods, fixed-laboratory methods are often assumed to automatically yield data of higher analytical quality than field-deployable technologies. This is not always true. The use of field-deployable technologies at a site eliminates links in the "sampling and analysis" data quality chain that can contribute to analytical uncertainty, such as contaminant loss from degradation or volatilization as a result of sample handling and transport. Triad real-time data can be obtained from fixed laboratories, but more commonly result from field-deployable techniques.

  • Definitive versus Screening

    "Definitive" implies something is associated with insignificant levels of error or uncertainty. "Screening" implies the opposite. Within the environmental field, these two terms are applied to a wide variety of situations in ways that suggest that there is a firm distinction between them. Thus, practitioners routinely refer to screening versus definitive analytical methods, and screening versus definitive quality data. The universal assumption has been that screening methods automatically generate screening quality data, while definitive methods generate definitive data. The follow-on assumption is that decisions based on "screening data" are automatically uncertain, whereas decisions based on "definitive data" are confident. This notion that "definitive methods = data (quality) = decision confidence" summarizes the first-generation data quality model developed as a starting point for the nascent environmental field in the 1970s and 80s. Although this over-simplified first-generation model was a very useful starting point, it cannot cope with the complexities of environmental analytical chemistry and the highly heterogeneous nature of most environmental matrices. For the environmental cleanup field to progress, this data quality model and the regulatory structures based on it need to be updated to match today's knowledge and experience.

    The Triad approach uses a second-generation data quality model that begins with acknowledging that environmental matrices are fundamentally heterogeneous in composition and contaminant distribution at both macro and micro scales. Compositional heterogeneity affects the design and performance of analytical methods. Excellent analytical performance on an idealized matrix, such as reagent water or clean laboratory sand, does not guarantee equally good performance on real-world soils or sediments. Distributional heterogeneity of pollutants means that even highly accurate analyses on tiny samples may produce results that are not representative of true concentrations in the bulk material from which they came.

     While common in the environmental field, the term "definitive" can be misinterpreted when used to describe methods and data. "Definitive" implies a level of confidence that may be unsubstantiated, especially when the numerous sampling and analytical variables inherent to environmental analysis and data interpretation are not controlled. The complexity of environmental matrices means that, for most common contaminants of concern, standard widely accepted methods are not truly "definitive." The analytical quality of the data they produce may be affected by interferences present in the original matrix and/or by intrinsic compound-specific quantitation capabilities. For example, the SW-846 determinative Method 8260 (GC/MS) is described as being able to detect and quantify more than 100 different compounds. However, the particular sample preparation method used in advance of Method 8260 (such as Method 5030 for purge-and-trap preparation of water samples for volatiles analysis) and the actual implementation of Method 8260 will provide much better performance for some of those compounds than for others on the list. The reality is that all analytical and measurement technologies produce data that fall on a continuum of analytical quality. On this continuum, some methods may be described as relatively more definitive than others in terms of analytical quality. The same method (e.g., SW-846 Methods 5030/8260) may produce data whose analytical quality varies depending on the contaminant, the matrix, the level of sample preparation and cleanup, operator experience, the selected calibration range, and the associated QA/QC protocols.

    Because the Triad approach focuses on defensible decision-making, the Triad views the true measure of "definitive" data as their ability to consistently lead to the correct decision. This definition of definitive, however, must encompass more variables than simple analytical quality for the instrument determining the analytical response. From a Triad perspective, classifying methods or technologies as "definitive" or "screening" and then assuming that ensuing data quality will be equivalent to the technology's classification is misleading, and obscures the true measure of a measurement technology's value: its contribution to overall decision-making quality.

  • Known Quality Data versus Unknown Quality Data

     Data are of known quality when the contributions to their uncertainty from sampling, analytical, and relational sources can be estimated (either qualitatively or quantitatively) with respect to the intended use of the data, and the documentation to that effect is verifiable and recognized by the scientific community as defensible. Data are of known quality when produced under a quality assurance (QA) program that includes associated quality control (QC). QC checks establish that both the method and the operator are performing within expected limits. Since the Triad is concerned with managing decision uncertainty, an additional focus for QC under the Triad is to tailor QC checks toward specific analytical uncertainties. For example, if the real-time field program is finding that nearly all sample results are non-detect, the QC program might be modified to increase the number of matrix spikes (spiked near the detection limit) to demonstrate that the field method is truly capable of detecting the contaminants were they present at the specified detection limit. On the other hand, if many samples are showing detections, then there is no need for a high density of that QC check during project implementation (see the sketch at the end of this item).

    In contrast, data of unknown quality are data for which critical steps in the data generation process, such as instrument checks, tests of operator proficiency, and other forms of QA and QC, are improperly performed or have not been documented, creating considerable uncertainty about whether the data are credible. Standard methods from fixed laboratories do not necessarily produce data of known quality. Neither do all mobile or non-standard analyses. Conversely, both categories of methods are capable of consistently producing data of known quality with proper QA and QC in place. Obtaining data of known quality is critical for the Triad, no matter what the source of that data is.
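
     The sketch below (Python) illustrates the kind of QC tailoring described above; the function name, spike rates, and non-detect threshold are hypothetical values chosen for illustration rather than prescribed by the Triad.

         def matrix_spike_frequency(results, detection_limit,
                                    default_rate=0.05, elevated_rate=0.20):
             """Suggest a matrix-spike frequency for the next batch of field analyses.

             If nearly all results are non-detects, spike more samples near the
             detection limit to show the field method could detect the contaminant
             if it were present.
             """
             nondetect_fraction = sum(1 for r in results if r < detection_limit) / len(results)
             if nondetect_fraction > 0.9:   # mostly non-detects: stress detection capability
                 return elevated_rate       # e.g., spike roughly 1 in 5 samples
             return default_rate            # routine rate when detections are common

         # Hypothetical field results (mg/kg): 95% non-detects triggers the elevated rate.
         field_results = [0.2] * 95 + [12.0] * 5
         print(matrix_spike_frequency(field_results, detection_limit=1.0))  # 0.2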

  • Decision Quality Data versus Screening Quality Data

     Measurement technologies can be categorized as producing either decision quality data or screening quality data. Decision quality data are data (individual points or a data set) of known quality that can be logically shown to be effective for making scientifically defensible project decisions without requiring additional data or information to back them up, because the relational, sampling, and analytical uncertainties in the data have been controlled to the degree necessary to meet clearly defined decision goals. Triad practitioners also refer to this type of data as "effective data," reflecting their value as "effective for decision-making purposes." In contrast, screening quality data are data (individual points or a data set) that may provide some useful information but are not sufficient to support project decision-making, because the amount of uncertainty (stemming from sampling uncertainty, analytical uncertainty, relational uncertainty, or other considerations) in the data set is greater than what is tolerable. Screening quality data place decision-makers in the "region of decision-making uncertainty," a concept discussed in greater detail later.

     When data that would be considered screening quality in isolation are combined with other information or additional data that manage the relevant uncertainties, the combined data/information package may become effective for decision-making. When data sets produced from different technologies are used collaboratively to manage sampling, analytical, and relational uncertainties, the Triad refers to them as "collaborative data sets." From a Triad perspective, the most cost-effective source of decision quality data is usually the collaborative use of two or more analytical technologies, as the sketch below illustrates.
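
     The following sketch (Python, with made-up numbers) shows one common form of collaborative data set: dense, inexpensive field XRF readings paired with a small number of fixed-laboratory analyses of split samples, with a simple regression used to express the field results on the laboratory scale before comparing them to an action level. The data values, action level, and helper function are illustrative assumptions only.

         def fit_line(xs, ys):
             """Ordinary least-squares slope and intercept for paired field/lab results."""
             n = len(xs)
             mean_x, mean_y = sum(xs) / n, sum(ys) / n
             sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             sxx = sum((x - mean_x) ** 2 for x in xs)
             slope = sxy / sxx
             return slope, mean_y - slope * mean_x

         # Split-sample pairs: field XRF reading vs. fixed-lab result (mg/kg, hypothetical).
         xrf = [40, 85, 150, 220, 310, 480]
         lab = [35, 90, 140, 230, 300, 500]
         slope, intercept = fit_line(xrf, lab)

         # With comparability demonstrated, the dense XRF coverage can be screened
         # against the action level on the laboratory scale.
         action_level = 400  # hypothetical cleanup criterion (mg/kg)
         exceedances = [x for x in xrf if slope * x + intercept > action_level]
         print(f"slope={slope:.2f}, intercept={intercept:.1f}, exceedances={exceedances}")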

  • CERCLA Analytical Method Classification Scheme

     As part of the CERCLA program, in 1988 the EPA published a classification scheme for analytical methods based on the intended uses of the data. The scheme included five levels, ranging from Level I (field test kits and portable instruments for vapor detection), used for site characterization and monitoring, to Level IV (GC/MS, ICP, AA, etc.), used for risk assessments, PRP determination, evaluation of alternatives, and engineering design. Level V covered non-conventional approaches and modifications to existing methods. Levels I through IV defined a continuum of analytical quality, with presumptive decision usage attached to each level. This data classification scheme was discarded by EPA in the 1990s, but some organizations continue to use a similar scheme. For the Triad, the utility of data produced by any particular method or system is based on their value to the site-specific decisions that must be made. Therefore, arbitrary schemes that classify the worth of data based solely on the type of analytical technology used to produce them are not useful within the Triad approach. In addition, the Triad recognizes that any method within the continuum of levels may fail to produce decision-quality data for particular contaminants of concern contained in a site's environmental matrix.

  • Approved versus Other Methods

     One last way analytical methods are often classified is by their acceptance or recognition by federal or state regulatory agencies and/or programs. An example is the SW-846 catalog of methods maintained by the RCRA program. As noted previously, many methods typically associated with field analysis are already included in SW-846. Regulators should take comfort from the knowledge that these methods have been thoroughly evaluated for their ability to provide reliable data when used appropriately. A second example is the set of methods commonly used by the Superfund Contract Laboratory Program. Other examples are the various laboratory and method certification programs administered by state agencies.

    These methods catalogs are useful departure points when reviewing the analytical options for a Triad program. The methods they contain have achieved a certain level of respectability, and in many cases acceptability, within the state and federal regulatory community. They are usually readily available. From a Triad perspective, however, two important points should be kept in mind. First, just because a particular method is contained in SW-846, for example, does not mean that it is applicable without modification or is even the most appropriate method for the needs of a particular site. This fact is stated within the SW-846 methods manual itself (Chapter 2), although few realize that SW-846 acknowledges that analytical flexibility is required to accommodate the scientific needs of waste programs. Analytical flexibility is the basis of EPA's Performance Based Measurement System (PBMS) initiative. Second, just because a method is not contained in a particular catalog of methods does not mean that it is inferior for a particular site application. Due to budget and workload constraints, it usually takes several years before a new technology, no matter how superior, can be incorporated into established methods catalogs.




