Technology Implementation

Real-time decision support, appropriate standard operating procedures, method applicability studies, and strategies for collaborative data analysis are all important dimensions for successful measurement technology implementation.

General logistical issues associated with implementation of the Triad are discussed in greater detail in the section entitled Logistical and Implementation Considerations. This section explores implementation dimensions that are closely associated with real-time measurement technologies.

Real-Time Decision Support

The Triad definition of real-time measurement systems includes sample acquisition technologies as well as the data management and analysis activities that follow once data have been generated. Historically, data management for field-deployable systems consisted of recording results in field notebooks and then implementing some form of filing system for field data. For many projects, this approach falls far short of what is required to support timely, technically-defensible decision-making based on real-time measurement results. Field-deployed real-time measurement systems typically lack the support structure associated with a long-established fixed laboratory. Consequently, special attention needs to be paid to how the data and field records generated by real-time measurement technologies will be managed.

The overall issue of data management for Triad programs is addressed in greater detail in Logistical and Implementation Considerations. For the purposes of this section, however, the important point is that as real-time technologies are selected and standard operating procedures are defined, those procedures should address how data and field records will be recorded, reported, and disseminated. Particular attention must be paid to developing a "field laboratory" data management scheme that can deliver timely information in a readily usable format while maintaining a record of sampling and measurement activities.
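
As a concrete illustration only, the Python sketch below shows one minimal way such a "field laboratory" record scheme might be implemented; the record fields, CSV layout, and function names are assumptions for demonstration rather than part of any Triad specification.

import csv
import os
from dataclasses import dataclass, asdict, fields
from datetime import datetime, timezone

@dataclass
class FieldMeasurement:
    sample_id: str       # unique sample identifier
    location_id: str     # boring, grid node, or station
    analyte: str         # contaminant of concern measured
    result: float        # reported concentration
    units: str           # e.g., mg/kg
    instrument: str      # real-time measurement system used
    analyst: str         # person responsible for the result
    timestamp: str = ""  # filled automatically when the record is logged

def log_measurement(path, record):
    """Append one result to a shared CSV so data are reported in a readily
    usable format and a record of measurement activities is maintained."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(record)])
        if write_header:
            writer.writeheader()
        writer.writerow(asdict(record))

log_measurement("field_results.csv",
                FieldMeasurement("SB-014-02", "SB-014", "lead", 412.0,
                                 "mg/kg", "XRF-01", "J. Smith"))

A scheme along these lines keeps every result timestamped and attributable in a single shared file that field staff and off-site reviewers can read immediately; a real project would typically layer review status, QC flags, and backup procedures on top of it.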

Standard Operating Procedures

The development of standard operating procedures is another critical element of successful real-time technology deployment. Unfortunately, in the past many field-deployable real-time measurement techniques were treated as producing data of marginal utility and quality, and consequently little attention was paid to describing standard operating procedures that could support replicable, technically-defensible measurements. The result was data of "unknown quality" that by definition would have little value for decision-making purposes. The Triad has much higher expectations for the role real-time measurement systems will play as part of dynamic work strategies, and consequently also much higher expectations for the level of planning and rigor involved with deploying these technologies. The ultimate goal of standard operating procedures for the Triad is the production of data of known quality regardless of the measurement technology used, with that quality sufficient to meet decision-making needs.

Method Evaluation

The section QA/QC Dimensions discusses quality assurance and quality control characteristics unique to the Triad. Among those is the concept of real-time system method evaluation. Method evaluation refers to the process used to determine that a real-time measurement system's performance characteristics are adequate for the role it will be required to play. Evaluation also refers to the process used to monitor real-time measurement system performance over the life of a project.

Method evaluation work can be conducted as a stand-alone activity (e.g., a demonstration of method applicability study), or it can be integrated with other, on-going characterization activities. Systematic planning identifies and specifies the site-specific performance characteristics required from proposed real-time measurement technologies. Performance parameters may include the standard measures of analytical quality (e.g., precision, bias, representativeness, selectivity, detection capabilities, and comparability) as well as operational characteristics important for project success (e.g., turn-around times, throughput, and robustness under expected environmental conditions). Evaluation work can involve a significant level of effort if the proposed measurement technique is truly novel and the roles it will play are critical to project success. Alternatively, evaluation activities may be limited if a technology already has a fairly well-established track record and/or deviations from expected performance will not have a significant impact on project outcomes.
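
As a hedged illustration of how two of these performance parameters might be quantified during a demonstration of method applicability, the Python sketch below computes precision (as relative standard deviation) and bias from replicate real-time measurements of a reference material with a known concentration; the acceptance limits and data values are placeholders, and project-specific limits would come from systematic planning.

from statistics import mean, stdev

def evaluate_precision_bias(replicates, true_value, max_rsd=20.0, max_bias=25.0):
    """Precision as percent relative standard deviation (%RSD) and bias as
    percent deviation of the replicate mean from a known reference value.
    The default acceptance limits are placeholder assumptions."""
    avg = mean(replicates)
    rsd = 100.0 * stdev(replicates) / avg
    bias = 100.0 * (avg - true_value) / true_value
    return {"mean": round(avg, 1),
            "rsd_percent": round(rsd, 1),
            "bias_percent": round(bias, 1),
            "acceptable": rsd <= max_rsd and abs(bias) <= max_bias}

# Example: seven XRF readings of a 100 mg/kg lead reference material (assumed data)
print(evaluate_precision_bias([92.0, 105.0, 98.0, 101.0, 95.0, 110.0, 99.0], 100.0))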

Method evaluation activities that take place once a measurement system is deployed are synonymous with proper QA/QC protocols. The distinction for the Triad is that for non-standard techniques, particularly those that may be prone to interference, matrix effects, or environmental impacts when deployed, there is a greater need to assure that technology performance remains as expected as work proceeds. A primary component of evaluation activities is the acquisition of split analyses that allow real-time measurement results to be compared with analyses that would be considered more definitive and rigorous. The purpose of comparison samples is not necessarily to establish a quantitative regression relationship between results from the two techniques; evaluation work conducted prior to technology deployment should have already determined that the real-time measurement technique, when it performs as expected, provides decision-making value. Instead, the purpose of comparisons is to identify deviations from expected behavior that may be indicative of performance problems.
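
One simple way to screen paired split results for such deviations is a relative percent difference (RPD) check, sketched below in Python; the 30% flagging threshold and the sample data are assumed example values rather than Triad requirements.

def rpd(field_result, lab_result):
    """Relative percent difference between a paired real-time and lab result."""
    return 100.0 * abs(field_result - lab_result) / ((field_result + lab_result) / 2.0)

def flag_splits(pairs, threshold=30.0):
    """Return sample IDs whose RPD suggests a possible performance problem."""
    return [sid for sid, field, lab in pairs if rpd(field, lab) > threshold]

# (sample ID, real-time result, definitive lab result) -- assumed example data
splits = [("SB-014-02", 412.0, 390.0),
          ("SB-021-01", 88.0, 160.0),
          ("SB-030-03", 510.0, 495.0)]
print(flag_splits(splits))  # -> ['SB-021-01']

Pairs flagged this way would prompt a closer look at the measurement system (e.g., calibration, matrix effects, moisture) rather than automatic rejection of the real-time data.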

The exact nature of method evaluation work will be technology- and site-specific. In particular, one should avoid a "standardized" approach to evaluation program design (e.g., selecting 10% of samples for comparison purposes) that lacks a site- and technology-specific technical basis and gives a false sense of QA/QC security. In fact, there are often advantages to modifying QC frequencies over the life of a program (e.g., front-end loading evaluation efforts and/or allowing the frequency to increase or decrease in response to rising or falling confidence in measurement system performance as work proceeds). A well-designed evaluation program will also specify the characteristics that flag a sample as particularly valuable for evaluation purposes. Generic examples include significant variations in sample matrix make-up, real-time results that fall into critical ranges from a decision-making perspective (e.g., near investigation levels), evidence of the presence of new or different contaminants of concern, and soil moisture contents that deviate significantly from expected conditions.
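
The Python sketch below illustrates one hypothetical way such an adaptive QC frequency could be managed, relaxing or tightening the split-sample rate as recent agreement improves or degrades; the specific frequencies, window size, and agreement criterion are illustrative assumptions, not recommended values.

def next_split_frequency(recent_rpds, current_freq, good_rpd=30.0,
                         window=10, min_freq=0.05, max_freq=0.25):
    """Adjust the fraction of samples sent for definitive (split) analysis
    based on how often recent splits agreed with real-time results.
    All numeric settings here are assumed example values."""
    if len(recent_rpds) < window:
        return current_freq                      # not enough evidence yet
    agreement = sum(r <= good_rpd for r in recent_rpds[-window:]) / window
    if agreement >= 0.9:
        return max(min_freq, current_freq / 2)   # confidence up: fewer splits
    if agreement < 0.7:
        return min(max_freq, current_freq * 2)   # confidence down: more splits
    return current_freq

# Front-end loaded at 25%; consistently good agreement relaxes it to 12.5%
print(next_split_frequency([5, 12, 8, 22, 3, 15, 9, 28, 6, 11], 0.25))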

Collaborative Data Sets

A fundamental project question that must be answered is how data within a Triad collaborative data set will be used to support decision-making. Approaches can be as simple as treating data from different sources separately for decision-making purposes (a weight-of-evidence approach), or as complicated as using sophisticated geo-statistical routines to quantitatively blend data from different sources with different levels of spatial density, analytical quality, and inter-method correlation. One important point from a project management perspective is that the process of systematic planning should identify the forms of data analysis that will be used before data collection begins. If a demonstration of method applicability is used at a site, one of the outcomes should be the opportunity to test and optimize these data analysis procedures.
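
As a hedged sketch of the simpler end of that spectrum (well short of full geostatistical blending), the Python example below converts dense real-time results to lab-equivalent values using a linear correlation fit on collocated pairs from a demonstration of method applicability, then pools them with the sparse definitive results; the data values, and the adequacy of a linear model, are assumptions for illustration only.

# statistics.linear_regression requires Python 3.10 or later
from statistics import linear_regression

# Collocated (real-time, definitive) pairs from a demonstration of method
# applicability -- assumed example data, e.g., lead in soil (mg/kg)
xrf = [120.0, 260.0, 410.0, 75.0, 530.0]
lab = [135.0, 240.0, 455.0, 90.0, 505.0]

slope, intercept = linear_regression(xrf, lab)

def lab_equivalent(real_time_result):
    """Predict the definitive-method value for a real-time result."""
    return slope * real_time_result + intercept

# Dense real-time coverage, expressed on the definitive-method scale, can then
# be pooled with the sparse definitive results for mapping or interpolation.
dense_xrf = [95.0, 310.0, 480.0]
blended = [round(lab_equivalent(x), 1) for x in dense_xrf] + lab
print(blended)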

A second important point is that the way samples are acquired, handled, and prepared for analysis can have a dramatic impact on the ability (or inability) to collaboratively use resulting data sets for decision-making purposes. Sample support, homogenization, preservation, extractions performed (if necessary), cleanup procedures, etc., all can significantly affect data comparability, which in turn is tightly linked to the options program managers will have for collaborative analyses.




