Obtaining a calibration in which we have confidence
As mentioned above, to calibrate an electronic manometer we can borrow, purchase or hire a similar or superior manometer and a pressure source, and compare the two manometers over the pressure range of interest. Our recent experience of measuring a pressure that appears to be very close to a safety limit motivates us to estimate, at each calibration point, the range within which the true pressure is likely to lie. This estimate is called an uncertainty estimate. In this case, however, quantitative analysis of the uncertainty associated with the comparison is immediately frustrated by our lack of confidence in the output of the manometer we are using as a reference, and by the unknown uncertainty associated with that instrument. If we attempt a more complete uncertainty analysis we may come across other factors that are not controlled or monitored during the comparison, e.g. fluctuations in the source pressure, environmental temperature, humidity and barometric pressure. If we are honest with ourselves we soon appreciate that to truly gain confidence in our electronic manometer we require at least the following conditions:
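As a sketch of how an uncertainty estimate at a single calibration point might be produced, the following Python snippet performs a simple Type A evaluation (in the spirit of the GUM) from repeated readings. All readings and the coverage factor k = 2 are hypothetical assumptions for illustration, not values from the text:

```python
import math

# Hypothetical repeated readings (kPa) from the device under test at one
# calibration point, with the reference pressure held nominally steady.
readings = [101.32, 101.35, 101.31, 101.34, 101.33]

n = len(readings)
mean = sum(readings) / n

# Experimental standard deviation of the readings (Type A evaluation).
std = math.sqrt(sum((r - mean) ** 2 for r in readings) / (n - 1))

# Standard uncertainty of the mean.
u = std / math.sqrt(n)

# Expanded uncertainty with an assumed coverage factor k = 2
# (roughly 95 % confidence if the readings are normally distributed).
k = 2
U = k * u

print(f"mean = {mean:.3f} kPa, expanded uncertainty U (k=2) = {U:.3f} kPa")
```

Note that this only captures the scatter of the readings; a full analysis would also combine the reference instrument's own uncertainty and the environmental effects mentioned above.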
1. We should have a high level of confidence in the reference manometer:
It should have been calibrated recently, against a superior instrument we trust, by technicians we trust
It should have a well-characterised uncertainty
2. The conditions under which the comparisons are performed should be well controlled and monitored.
3. The technician doing the comparison should have the technical competence and experience to enable him/her to identify and control external factors that might affect the comparisons.
4. The calibration process should deliver one of:
Assurance that the manometer is performing within the manufacturer’s specifications
Uncertainty estimates associated with each calibration point indicating the range within which the true pressure lies with an appropriate level of confidence.
Calibration is formally defined as:
‘A set of operations that establish, under specified conditions, the relationship between values of quantities indicated by a measuring instrument or measuring system, or values represented by a material measure or a reference material, and the corresponding values realised by standards’.
A product of a formal calibration is usually a calibration report, including a table containing a set of reference values in which the calibration lab has a high level of confidence, together with the corresponding values indicated by the device under test.
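A minimal sketch of what such a table enables: given reference values and the corresponding indicated values (both hypothetical here), we can tabulate the error of the instrument and the correction to apply to its readings at each point:

```python
# Hypothetical calibration data: reference pressure vs. the value
# indicated by the device under test (both in kPa).
calibration = [
    (100.0, 100.12),
    (200.0, 200.09),
    (300.0, 300.21),
    (400.0, 400.18),
]

print(f"{'reference':>10} {'indicated':>10} {'error':>8} {'correction':>11}")
for ref, ind in calibration:
    error = ind - ref       # how far the instrument reads high (or low)
    correction = ref - ind  # value to add to an indicated reading
    print(f"{ref:>10.2f} {ind:>10.2f} {error:>8.2f} {correction:>11.2f}")
```

In a real report each row would also carry an uncertainty estimate for the reference value, as discussed above.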
To calibrate our manometer in a manner that fulfils the requirements enumerated above, we have two options.
1. Perform the calibration ourselves. In this way we have full control over all the technical and quality aspects of the calibration process.
2. Request an independent laboratory to perform the calibration but audit that laboratory thoroughly to ensure that they have reference instruments in which we have confidence, a controlled environment in which to perform the calibration, competent technicians who understand our requirements well, and procedures for producing error-free calibration reports.
Options (1) and (2) above are feasible only under limited circumstances. Maintaining our own dedicated calibration lab, however, is time-consuming and costly. We also soon discover that if we calibrate our instruments ourselves, our customers start auditing us to verify that we are competent, doing the job properly, keeping proper records, etc. We are likely to find that managing audits of our own labs, and regularly auditing the other laboratories we use, is time-consuming and costly.
After a little honest thought we come to the conclusion that we (and probably many other organisations that regularly make measurements in which a high level of confidence is required) would benefit from a national or international system that gives us confidence in calibration laboratory services. A system that provides confidence intervals around our critical measurements would be extremely valuable.
ISO 17025 is an international standard governing most of the important aspects of calibration processes. Laboratories that meet this standard should operate a quality control system, be technically competent, and be capable of producing technically valid results. The intention of ISO 17025 is to provide a functional system, or hierarchy, of calibration laboratories in which we can have confidence. Any calibration performed by an ISO 17025 accredited lab should:
1. be performed by competent technicians in a controlled environment
2. use reference instruments or materials in which we can have confidence
3. be supported by an administrative quality system similar to ISO 9001
Mutual Recognition Agreements (MRAs)
Agreements between accrediting authorities in different countries extend the hierarchy of trusted laboratories to a worldwide pyramid-shaped structure, with the BIPM in Paris at its apex. Traceability to SI units also ensures that measurements we make in Sydney, Australia can be compared with similar traceable measurements made in many other countries.
Once our inspection tool has been calibrated, how long can we trust its performance?
Depending on how it is used, our instrument may be exposed to vibration, varying temperatures, humidity, etc. during storage and transport. After the initial calibration (which in some cases is performed by the manufacturer) we have no information concerning its drift and its response to normal handling. Only after the second and (preferably) subsequent calibrations do we have information from which we can deduce whether or not the performance of the instrument between calibrations is adequate.
Calibration interval is an aspect of calibration that can be critically important to the validity of measurements and confidence intervals, but it is highly instrument-specific and hence is not covered by a general standard like ISO 17025. Many manufacturers recommend calibration intervals (often one year) for their instruments. In practice, however, the user should determine the calibration interval based on analysis of successive calibration reports, the cost of calibration, the manner in which the instrument is stored and treated during normal use, and the consequences of out-of-specification measurements.
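As a sketch of the kind of analysis a user might perform on successive calibration reports, the snippet below fits a straight line to the error recorded at one fixed check point over several annual calibrations, then extrapolates to estimate roughly when the error would reach a specification limit. All numbers, including the specification limit, are hypothetical:

```python
# Hypothetical history: error (kPa) at a fixed check point, taken from
# successive annual calibration reports of the same instrument.
years = [0.0, 1.0, 2.0, 3.0]
errors = [0.02, 0.05, 0.09, 0.12]
spec_limit = 0.25  # assumed maximum acceptable error (kPa)

# Least-squares fit of a straight line: error = a + b * t.
n = len(years)
mean_t = sum(years) / n
mean_e = sum(errors) / n
b = sum((t - mean_t) * (e - mean_e) for t, e in zip(years, errors)) / \
    sum((t - mean_t) ** 2 for t in years)
a = mean_e - b * mean_t

# Extrapolate to estimate when the spec limit would be reached.
t_limit = (spec_limit - a) / b
print(f"drift = {b:.3f} kPa/year; spec limit reached near year {t_limit:.1f}")
```

Such an extrapolation is only a rough guide; sudden drift after rough handling, for example, would not be captured, which is why storage and treatment of the instrument also enter the decision.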