Open Access. Published by De Gruyter, November 9, 2022, under a CC BY 4.0 license.

Time to address quality control processes applied to antibody testing for infectious diseases

  • Wayne J. Dimech, Giuseppe A. Vincini, Mario Plebani, Giuseppe Lippi, James H. Nichols and Oswald Sonntag

Abstract

As testing for infectious diseases moves from manual, biological methods such as complement fixation to high-throughput automated analyzers, the methods for controlling these assays have also changed to reflect those used in clinical chemistry. However, there are many differences between infectious disease serology and clinical chemistry testing, and these differences have not been considered when applying traditional quality control methods to serology. Infectious disease serology, which is highly regulated, detects antibodies of varying classes directed against multiple and different antigens that change according to the organism’s genotype/serotype and the stage of disease. Although the tests report a numerical value (usually a signal-to-cut-off ratio), they do not measure an amount of antibody but rather the intensity of binding within the test system. All serology assays experience lot-to-lot variation, making the quality control methods used in clinical chemistry inappropriate. In many jurisdictions, the use of the manufacturer-provided kit controls is mandatory to validate the test run. Third-party controls, whose use is highly recommended by ISO 15189 and the World Health Organization, must be manufactured in a manner that minimizes lot-to-lot variation and at a level of reactivity that detects exceptional variation. This paper outlines the differences between clinical chemistry and infectious disease serology and offers a range of recommendations for addressing the quality control of infectious disease serology.

Introduction

Laboratory staff are obliged to follow local and international guidance documents, laws and regulations, and auditors will review laboratory practices against these requirements [1], [2], [3], [4], [5]. Guidelines relating to quality control (QC) were developed and implemented for clinical chemistry from the 1950s. Since then, infectious disease testing has moved from manual, labor-intensive test systems such as hemagglutination inhibition and complement fixation to the automated testing platforms used in clinical chemistry, which have been adapted for the detection of infectious disease antibodies and antigens. Many larger clinical laboratories have introduced the concept of a “core laboratory”, where samples for a range of analytes, including infectious diseases, are tested on platforms linked by an automated “track” system. As these core laboratories are commonly overseen by clinical chemists, it is not surprising that methods used to standardize and control clinical chemistry tests were implemented for infectious disease testing [6].

However, there are significant differences between infectious disease serology and testing for clinical chemistry analytes, and therefore traditional QC methods cannot be universally applied [6, 7]. This document highlights the major differences between infectious disease serology and clinical chemistry testing, reviews the current paradigm for the application of QC methodology and makes recommendations on future approaches. For the purposes of this document, the term “QC” refers to the testing and monitoring of control materials: QC samples provided either by the assay manufacturer (kit controls) or by a non-manufacturer QC provider (third-party controls).

Differences between clinical chemistry and infectious disease serology testing

There are significant differences between the measurement of clinical chemistry measurands and infectious disease serology (Table 1) [6]. The underlying reason for these differences is that, when testing for an inert chemical such as glucose in body fluids, the test system determines the actual quantity of glucose present (how much, i.e., the concentration). In contrast, when testing for antibodies, the test system determines how well antibodies bind to a given antigen. A patient sample with low levels of antibodies but high affinity and avidity for a specific antigen could have a higher level of reactivity than a sample with a high concentration of low-avidity antibodies.

Table 1:

Differences between clinical chemistry and infectious disease serology testing.

Clinical chemistry: “Type A” inert analyte
  1. Known molecular structure
  2. Known molecular weight
  3. Invariable composition
  4. No change over time

Infectious disease serology: “Type B” functional biological analyte
  1. Variable structures
  2. Different classes and subclasses
  3. Antibody response varies over time
  4. Antibodies may be fragmented, polyclonal or monoclonal, free or complexed
  5. Variable avidity and affinity

Clinical chemistry: Several medical decision points
  1. e.g. hyper- and hypo-glycaemia

Infectious disease serology: Single decision point
  1. Determining the absence or presence of antibodies and/or positivity or negativity in comparison to a cut-off

Clinical chemistry: Quantitative
  1. Determining the absolute amount of measurand (i.e., concentration)

Infectious disease serology: Qualitative
  1. Determining binding efficiency
  2. Uses a chemical signal to detect the measurand

Clinical chemistry: Single homogeneous molecule
  1. No or minimal heterogeneity
  2. Test systems developed for a specific molecular composition

Infectious disease serology: Multiple and varying antigens
  1. Different genotypes/serotypes
  2. Antigenic mutations

Clinical chemistry: Lower level of regulation
  1. Generally low-risk analytes
  2. Classified by regulators as Class B (2) or C (3), indicating low risk to the community

Infectious disease serology: Highly regulated
  1. Generally moderate to high risk
  2. Classified by regulators as Class C (3) or D (4), indicating high risk to the individual and the community

Clinical chemistry: Linear dose response curve
  1. Usually highly sensitive tests detecting low concentrations of analyte
  2. Assay demonstrates a linear response of concentration to signal throughout the analytical measurement range

Infectious disease serology: Non-linear dose response curve
  1. No response if analyte concentration is low
  2. No increase in response if the test system is saturated
  3. Strength of signal is dependent on the affinity of the antigen:antibody binding

Clinical chemistry: Can adjust for reagent lot variation (bias)
  1. Can re-calibrate the test system to adjust for bias
  2. Calibrators traceable to an international standard are available

Infectious disease serology: Cannot adjust for reagent lot variation (bias)
  1. Tests are highly regulated, not allowing modification
  2. A limited number of international standards available
  3. Modest commutability to international standards

Clinical chemistry: International standards available
  1. Well-defined international standards available for many analytes
  2. Secondary standards are traceable to the international standard

Infectious disease serology: Poor or no international standards
  1. International standards unavailable for many tests
  2. Where an international standard is available, standardisation efforts are mainly unsuccessful
  3. Many tests are not calibrated to an international standard even when one exists

Clinical chemistry: Certified reference methods (CRM)
  1. Well-established CRMs
  2. e.g. atomic absorption, HPLC, mass spectrometry

Infectious disease serology: No certified reference methods
  1. No CRM available
  2. Variable quantitative results between test systems

The experience with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has highlighted the differences between test systems that detect antibodies against the same virus [8]. The multitude of serology tests for SARS-CoV-2 vary in the antibody detected (binding antibodies including IgG only, IgM only, IgG and IgM, total including IgA, or dimeric IgA); in the antigens used (whole disrupted virus, recombinant spike protein and/or nucleocapsid protein); in the structure of the antigen derived from one or more SARS-CoV-2 variants and/or lineages (most tests still use antigens derived from the ancestral virus identified in Wuhan in 2019, which is no longer circulating); and in the chemistries employed (rapid tests, colorimetric microtiter enzyme immunoassays, chemiluminescence, plaque neutralization). None of the serological assays for SARS-CoV-2 could be considered to test for the same measurand. The same is true for serological assays for all infectious diseases, including HIV, hepatitis and vaccine-preventable diseases [6].

Unlike testing for quantitative analytes, infectious disease serology does not quantify the number of antibodies in a patient sample. The test systems compare the intensity of the chemical reaction against a pre-determined cut-off, generally reporting the result as a signal-to-cut-off (S/Co) value or equivalent. It should be noted that, although the dose response curve is often linear, the signal is not necessarily indicative of the “amount” of antibodies present, only the intensity of the binding. The level of antibody response to infection often declines over time. However, individuals retain memory B cells that elicit strong and specific antibody responses on re-exposure to the same or similar antigens (e.g. spike protein belonging to different SARS-CoV-2 strains). Therefore, the detection of binding antibodies does not necessarily correspond to the degree of protective immunity [9].
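
To illustrate how such qualitative results are expressed, the short Python sketch below converts an instrument signal and assay cut-off into an S/Co value and a reactive/non-reactive interpretation; the values and function names are hypothetical and not taken from any particular assay.

def signal_to_cutoff(signal: float, cutoff: float) -> float:
    # S/Co: ratio of the measured signal to the assay cut-off
    return signal / cutoff

def interpret(s_co: float) -> str:
    # A result at or above the cut-off (S/Co >= 1.0) is reported as reactive.
    # Note: S/Co reflects binding intensity, not antibody concentration.
    return "reactive" if s_co >= 1.0 else "non-reactive"

# Hypothetical example: a signal of 2.6 units against a cut-off of 1.3 gives S/Co = 2.0
s_co = signal_to_cutoff(2.6, 1.3)
print(s_co, interpret(s_co))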

Regulation of infectious disease tests

The International Medical Device Regulators Forum (IMDRF), along with the Global Harmonization Task Force (GHTF), classifies in-vitro diagnostic devices (IVDs) into four risk categories [10]. The assessment and regulation of IVDs in most countries now comply with these classifications. The highest-risk IVDs (Class D in Europe; Class 4 in Australia) include tests for blood-borne infections such as hepatitis and HIV, irrespective of whether they are used for pre-transfusion screening or clinical diagnosis [11]. In the USA, tests with an intended use for blood screening are regulated differently from those with a diagnostic intended use; both, however, are more highly regulated than clinical chemistry tests [12]. In Europe, anti-SARS-CoV-2 testing is classified as Class D, whereas in Australia it is classified as Class 3 (i.e. Class C in Europe). All other infectious disease tests are regulated as Class C or 3, whereas most clinical chemistry tests are Class B or 2. The IVD regulations of most countries require manufacturers of these high-risk IVDs to provide extensive pre-market performance evidence and, in some cases, to undergo independent performance evaluations [5]. In Europe, this evidence is assessed by a Notified Body operating under European law, whereas in the USA, blood screening assays are regulated by the Center for Biologics Evaluation and Research (CBER). Once the IVD is released to the market, the user must follow the manufacturer’s instructions for use (IFU) without modification, including testing the manufacturer’s kit controls and applying the manufacturer’s kit control acceptance criteria. The user cannot make changes to the test system to adjust for bias. To deviate from the manufacturer’s IFU would be to use the test kit “off-licence”, making it an “in-house assay”, which is illegal for high-risk IVDs in some jurisdictions. Prior to release for use, each new batch of high-risk IVDs must be reviewed by the regulatory authority in Europe and the USA, in many cases through performance testing by an authorized independent laboratory such as the Paul-Ehrlich-Institut.

Commercially manufactured third-party controls are also IVDs and are regulated under the same IMDRF and GHTF framework. In Australia, third-party controls for infectious disease testing are Class 2 devices, whereas in Europe, under the IVDR, QC materials carry the same risk classification as the assays they are intended to monitor and are therefore high risk (Class D) for blood-borne infections. In the USA, QCs designed for monitoring blood-borne infections are also regulated by CBER and require more stringent oversight.

The regulatory environment for infectious disease serology assays is therefore far more stringent than that for clinical chemistry reagents, and restrictions on the use and modification of these assays limit flexibility in the application of control methods.

Guidance documents

The dominant paradigm for establishing performance specifications (goals) for virtually all medical laboratory testing is the 2014/2015 Milan hierarchy [13, 14]. This consensus document proposes analytical performance specifications based on one of three models: (a) the effect of analytical performance on clinical outcomes, using direct or indirect studies; (b) components of biological variation of the measurand; and (c) the state of the art, that is, the highest level of analytical performance technically achievable. Of these options, only the first can be related to infectious disease serology, as the biological variability of measurands in serology is significant and changes over time, and there are no higher-order test systems for infectious disease serology. Milan performance specifications are established as allowable imprecision, allowable bias, and/or allowable total analytical error, typically expressed as percentages.
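
For readers less familiar with how such specifications are expressed, the sketch below shows the commonly used linear total-error model combining bias and imprecision; the figures are purely illustrative and are not proposed performance targets for any serology assay.

def total_analytical_error(bias_pct: float, cv_pct: float, z: float = 1.65) -> float:
    # Linear total-error model: TAE = |bias| + z * CV, with z = 1.65 for ~95% coverage
    return abs(bias_pct) + z * cv_pct

# Illustrative figures only: 3% bias and 5% imprecision give a TAE of about 11.3%
print(f"TAE = {total_analytical_error(3.0, 5.0):.2f}%")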

Acceptance criteria established externally by QC sample manufacturers, control material vendors or external quality assessment (EQA) providers using peer data are derived from many participants contributing variation from multiple sources, such as different reagents and reagent lots, instruments, operators, and internal processes. Therefore, in infectious disease serology, only results from the same test system testing the same material (a peer group) can be used to establish acceptance criteria [15]. Even so, the acceptance ranges calculated from these data are, by definition, wider (sometimes much wider) than those calculated from data obtained by an individual laboratory using that test system, as the sources of variation within a single laboratory are fewer than those across multiple laboratories. A possible danger of using acceptance criteria established from the collective data of multi-laboratory peer groups is that the acceptance range may be so wide that it no longer detects important unacceptable results.

Alternatively, establishing acceptance ranges from data obtained in an individual laboratory, using a small number of data points to calculate a “too narrow” acceptance range, can lead to “false rejections”, where acceptable QC results are flagged as “errors”, causing unnecessary concern and wasting time and resources in troubleshooting. CLSI C24-A3 [1] and the UK Standards for Microbiology Investigations [16], for example, specifically advise laboratories to establish control limits using the mean and standard deviation (SD) of as few as 20 data points derived from their own laboratory. ISO 15189 also strongly recommends that laboratories individually design their QC procedures [17]. However, each of these approaches will lead to acceptance ranges that are too tight, resulting in false rejections in infectious disease serology settings.
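
As a minimal sketch of the conventional approach these guidelines describe, the following Python snippet derives control limits from 20 in-house QC results using the mean ± 2 SD; the data are invented, and the point is that limits derived this way reflect only within-laboratory, within-reagent-lot imprecision and may flag a modest, clinically irrelevant shift from a new reagent lot as an error.

import statistics

# Twenty hypothetical S/Co results for a low-positive QC on one reagent lot
qc_results = [2.10, 2.05, 2.18, 2.02, 2.11, 2.07, 2.15, 2.09, 2.04, 2.12,
              2.08, 2.14, 2.06, 2.13, 2.03, 2.10, 2.16, 2.01, 2.09, 2.11]

mean = statistics.mean(qc_results)
sd = statistics.stdev(qc_results)  # sample standard deviation
lower, upper = mean - 2 * sd, mean + 2 * sd
print(f"Limits from 20 points: {lower:.2f} to {upper:.2f}")

# A new reagent lot shifts the QC mean slightly; the result is flagged as an "error"
new_lot_result = 2.35
print("Flagged:", not (lower <= new_lot_result <= upper))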

The current version of ISO 15189 is non-specific regarding the use of quality control and silent regarding the establishment of control limits. It states that a medical laboratory shall design quality control procedures using quality control materials that should be periodically examined with a frequency based on the stability of the test system and the risk of harm to the patient. It also states that a third-party QC material should be considered in addition to, or instead of, the manufacturer’s control material. It requires a laboratory to have a procedure for corrective action when QC rules are violated but does not specify what those rules should be.

The CLSI document “Laboratory Quality Control Based on Risk Management” (EP23-A) describes a quality control process that considers the risk of failure of a testing system rather than specifying analytical performance objectives [18]. The challenge for serology laboratory staff is determining how to establish specific limits based on these guidance documents, which rarely (if ever) account for the measurands being detected in infectious disease serology. Thus, compared with other areas of laboratory testing, current recommended best practices for QC in serology are vague and at odds with one another.

Kit control materials vs. third party control materials

Guidance documents for QC do not differentiate between the use of manufacturer-provided kit control materials and non-manufacturer, third-party control materials [1, 4, 16]. This is most likely because, in clinical chemistry, the control materials provided by the manufacturer and by a third-party provider can be interchanged when they are calibrated against a certified reference material [19]. In infectious disease serology, kit control materials and third-party control materials serve different purposes. The kit control materials are optimized by the reagent manufacturer for their reagent and calibrator, and sometimes for a particular reagent lot. Generally, the manufacturer specifies acceptance criteria for their kit controls in the IFU, which must be fulfilled before patient results can be reported. That is, the kit controls validate a test run. These kit control acceptance limits are usually very wide. It must be noted that IVDs are assessed by regulatory authorities for market approval using the manufacturer’s kit control acceptance criteria as specified in the IFU. If the kit control results fall within that range, the sensitivity and the specificity of the IVD are considered to be stable. Under the European Commission IVDR, as well as Australia’s registration processes, the laboratory must follow the IFU, which includes using the kit control materials [3, 5]. Where the use of kit controls is indicated by the IFU, replacing them with a third-party control material is generally prohibited, particularly for high-risk IVDs such as those for HIV or hepatitis. However, kit controls are often manufactured in batches that have lot-to-lot variation, sometimes formulated specifically to account for reagent lot changes so that their reactivity falls within the manufacturer’s acceptance criteria. Where this occurs, kit controls are not appropriate for monitoring the performance of an assay over long periods of time.

Choice of third-party quality control materials

In contrast to manufacturer’s kit controls, third-party control materials monitor variation in the test system over time and should not be used to validate a test run; doing so would constitute “off-licence” use of the test, outside the regulatory approval of the IVD. Effective monitoring of third-party QC results detects unexpected variation. A third-party control material should be optimized for the test system being monitored and, ideally, react on the linear part of the dose response curve (usually a low positive reactivity). In clinical chemistry, typically two or three control materials are run: a negative, a low positive (close to the clinical decision limit) and a high positive [20]. While all three levels may provide useful information in clinical chemistry, the low positive control nearest the critical decision level (the cut-off of the assay) is recommended in infectious disease serology, as it measures variation at low levels of reactivity and is more sensitive to changes in the test system [15, 21].

All infectious disease serology tests experience reagent lot-to-lot variation, irrespective of the manufacturer or the analyte. Unlike in some chemistry test systems, this variation cannot be eliminated by re-calibration. The challenge for laboratory scientists is to recognize and react to unacceptable variation. Monitoring the reactivity of the same third-party control material over a long period is the preferred method for comparing reagent lot-to-lot variation, provided the third-party control is stable for that period [15]. This can be achieved by the laboratory purchasing a large amount of a single lot of third-party QC with a long expiry date. Preferably, the third-party QC should be manufactured to exhibit minimal QC lot-to-lot variation. Control materials with minimal lot-to-lot variation allow the user to monitor IVD performance continuously over a long period. The challenge for laboratories is to establish meaningful, evidence-based and scientifically robust acceptance criteria that account for the risk of false results.
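
A sketch of what such long-term monitoring might look like in practice is given below; the reagent lot identifiers, S/Co values and the 25% review threshold are all hypothetical, chosen only to show how grouping results from a single, stable third-party QC lot by reagent lot separates a lot-related shift from ordinary within-lot imprecision.

import statistics
from collections import defaultdict

# Hypothetical (reagent_lot, S/Co) observations for one stable third-party QC lot
observations = [
    ("lot_A", 2.08), ("lot_A", 2.12), ("lot_A", 2.05), ("lot_A", 2.10),
    ("lot_B", 2.31), ("lot_B", 2.27), ("lot_B", 2.35), ("lot_B", 2.29),
    ("lot_C", 1.42), ("lot_C", 1.38), ("lot_C", 1.45), ("lot_C", 1.40),
]

by_lot = defaultdict(list)
for reagent_lot, s_co in observations:
    by_lot[reagent_lot].append(s_co)

long_term_mean = statistics.mean(s_co for _, s_co in observations)

for reagent_lot, values in by_lot.items():
    lot_mean = statistics.mean(values)
    shift_pct = 100 * (lot_mean - long_term_mean) / long_term_mean
    # Illustrative (not evidence-based) review threshold of 25% from the long-term mean
    flag = "review" if abs(shift_pct) > 25 else "ok"
    print(f"{reagent_lot}: mean S/Co {lot_mean:.2f} ({shift_pct:+.1f}%) -> {flag}")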

Establishing control limits for infectious disease serology

Traditional methods of establishing control limits use the mean ± 1, 2 or 3 SD calculated from a relatively small number of results [1, 4, 16]. These methods do not consider normal reagent lot-to-lot variation, although more recently the issue of lot-to-lot variation in clinical chemistry has been recognized [22], [23], [24]. When a new lot of reagent becomes available, previously established limits may no longer be valid and the acceptance criteria must be re-established [25, 26]. In some cases, laboratories can purchase a single lot of reagent and use it for an extended period, avoiding the need to re-establish acceptance limits, but this approach is not possible for small laboratories that cannot hold significant stock or for low-volume testing. A laboratory setting control limits using solely its own quality control results can only assess the imprecision of the test system in its own testing environment and therefore remains subject to the challenges of frequent reagent lot changes. A more universal solution to reagent lot-to-lot variation is required [7].

QC results from a single laboratory cannot be used to assess bias or to determine whether changes in the QC mean derive from reagent lot variation or laboratory-induced variation. Consequently, a laboratory will be unable to detect systematic errors, such as a low signal reading, an incorrect incubation temperature or a failure to wash away unbound conjugate. Only by participating in a peer-comparison QC program or an EQA program can a laboratory truly estimate its bias.

To bring serology into better alignment with the Milan Consensus, performance specifications need to be established based on the effect of analytical variation on clinical outcomes, and this knowledge then translated into acceptance ranges for control limits.

For IVDs reporting results as an S/Co or similar, the results of third-party control materials have a reasonably normal distribution over time, allowing laboratories to take advantage of parametric statistics. Pre-determined control limits can be provided to laboratories by third-party control vendors, but the vendor must undertake the additional work of adapting the performance specification to the laboratory’s individual context (method, instrument, reagent lot) before calculating and implementing the recommended range. Only a well-organized QC program with access to a vast amount of performance data across instruments, operators, and reagent lots can fulfill this need. Fortunately, some control material vendors with peer group programs collect these data and can use the historical data to establish meaningful quality control limits. The strength of a peer group control vendor is its wider perspective: the ability to track the universe of instruments, methods, antibodies and reagent lots, harvesting a wealth of data and insight that no individual laboratory can replicate. The peer group provider has readily accessible data on any individual reagent lot, allowing comparison of results across laboratories, reagent lots and instruments and facilitating the troubleshooting of unexpected QC results [27, 28].
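
A minimal sketch of how such historical, multi-laboratory, multi-reagent-lot data might be turned into acceptance limits is shown below; the pooled values, the choice of mean ± 3 SD and the function name are assumptions for illustration, not a description of any vendor’s published method.

import statistics

def limits_from_history(historical_s_co, k=3.0):
    # Because pooled peer-group data already contain normal lot-to-lot and
    # between-laboratory variation, the resulting range is wider than limits
    # derived from a single laboratory and a single reagent lot.
    mean = statistics.mean(historical_s_co)
    sd = statistics.stdev(historical_s_co)
    return mean - k * sd, mean + k * sd

# Hypothetical pooled peer-group results spanning several reagent lots
history = [2.1, 2.0, 2.2, 1.9, 2.3, 2.4, 1.8, 2.2, 2.1, 2.5, 1.9, 2.0, 2.3, 2.2, 2.1]
low, high = limits_from_history(history)
print(f"Peer-derived acceptance range: {low:.2f} to {high:.2f}")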

Analytical performance and clinical outcomes

Laboratory staff often assume that a decrease in low positive QC reactivity can predict changes in the clinical sensitivity of the assay that would result in a false negative patient result. This approach involves some assumptions and raises questions. The main assumption is that the third-party control material and seroconverting samples are commutable. Note that a real, clinical low-positive reactivity arises only at the time of seroconversion. At this time the antibody response is immature, mostly dominated by IgM, and the avidity is generally low (recall that serology assays measure how well binding occurs). The early antibody response against some specific proteins wanes quickly. Indeed, an individual will have low-level reactivity during a seroconversion event for no more than approximately 72 h [27]. In contrast, a low-positive third-party control material is typically manufactured by diluting plasma from chronically infected individuals, representing mature, highly avid antibodies. The National Serology Reference Laboratory, Australia (NRL) has mapped the commutability of third-party control materials and real negative [29], [30], [31] and low positive clinical samples [27]. In the assay/control combination studied (Abbott Architect anti-HCV), only clinical samples with an S/Co less than 2.3 returned false negative results, and only on a few of the six reagent lots that were exhibiting unexpectedly low reactivity. The authors concluded that a false negative clinical result associated with a significant decrease in control reactivity can occur, but that the risk is minimal and can be mitigated by molecular testing and/or clinical history.

Recommendations

Guidelines for analyzing controls were developed for clinical chemistry but have been applied universally to serologic testing without validation of fitness for purpose. Infectious disease testing presents a different paradigm from clinical chemistry, whose analytes are homogeneous and well defined. Serology tests are essentially qualitative, indicating the presence or absence of antibodies. The S/Co value or equivalent unit reported is not a measure of the quantitative amount of antibodies but an indication of the ability of the antibodies present to bind to particular proteins. Of note, unlike the majority of clinical chemistry tests [24], infectious disease serology assays experience normal lot-to-lot variation [15]. This document provides several recommendations to be included in a serology quality control guideline.

  1. Each laboratory must comply with current international and national regulatory requirements for serological testing.

  2. Infectious disease serology tests do not measure a quantity or concentration of a specific analyte. They are qualitative test systems that produce a signal to determine the presence or absence of the measurand.

  3. Each test for an organism (e.g. SARS-CoV-2) detects different measurands, such as antibodies to spike and/or nucleocapsid proteins; employs different conjugates (e.g. mouse monoclonal, human polyclonal); detects different classes or subclasses of antibody (IgG, IgM, IgA, total); and uses different chemistries to detect the signal. Therefore, quantitative signal results from one assay cannot be compared with those of another assay purporting to detect antibodies to the same organism.

  4. If specified by the manufacturer in the IFU, the manufacturer’s kit control materials should always be used to validate assay performance as instructed. The use of kit controls allows reagent manufacturers to troubleshoot issues.

  5. When kit controls are within the manufacturer’s acceptance range specified in the IFU, changes in the mean reactivity of a third-party control material are unlikely to reflect a change in the sensitivity or specificity of the assay, and third-party QCs should not be used for this purpose.

  6. The testing and monitoring of third-party control materials for infectious disease testing is highly recommended as an industry standard.

  7. Unlike kit control materials, which validate a test run to allow for the release of patient results, well-designed third-party control materials can monitor the test system performance over time.

  8. Well-designed third-party control materials should be optimized for specific immunoassays, have minimal lot-to-lot variation and be stable for long periods of time.

  9. Although different levels of control materials may be used, at least one third-party QC should have reactivity on the linear part of the dose response curve, which is not necessarily a level close to the test cut-off.

  10. At a minimum, a third-party control material should be analyzed each morning prior to testing patient samples on automated analyzers, or with each test run when batch testing. Ideally, the third-party QC material should also be analyzed at each change of shift, after assay calibration, after a significant maintenance event, or in any other situation that may reasonably introduce a change to the test system.

  11. QC materials should be periodically updated and validated according to the emergence of new viral variants and introduction of new immunoassays in the market.

  12. Lot-to-lot variation in reagents, as detected by third-party QC results, is normal. Acceptance criteria must be designed to differentiate normal from unacceptable lot-to-lot reagent variation.

  13. Current quality control guidelines assume minimal reagent lot-to-lot variation and therefore provide no useful direction to laboratories when lot-to-lot variation occurs.

  14. Using the mean ± 2 SD of 20 results to establish quality control acceptance limits, as recommended by clinical chemistry guidelines, does not account for “normal” reagent lot-to-lot variation and will lead to many false rejections for infectious disease serology tests, wasting resources and decreasing confidence in the assay and/or the quality controls.

  15. Laboratories monitoring quality control results obtained only from testing in their laboratory will monitor test precision but will not detect or monitor bias or systematic error.

  16. Acceptance criteria for infectious disease test quality control results should encompass “normal” variation, including reagent lot-to-lot variation over time. Historical data obtained from peer comparison programs are ideal for understanding normal variation and can be used to establish evidence-based acceptance limits.

  17. Providers of quality control programs with peer comparison that collect and analyze control results from multiple laboratories using the same control materials and test system give the laboratory user greater opportunity to investigate unexpected results.

  18. The imprecision of infectious disease serology assays is generally stable from lot to lot, even if the mean of control reactivity changes. This performance metric could be a useful tool for laboratories when monitoring assay performance.

  19. Total Allowable Error of infectious disease serology assays is difficult to define as no clinical error is acceptable. However, historical data may be utilized to estimate the inherent analytical error (imprecision and bias) and determine if that variation leads to acceptable clinical outcomes, based on the risk of false patient results.

  20. Current medical test quality control guidelines should be reviewed to exclude infectious disease testing and specific guidelines for quality control of infectious disease should be developed.


Corresponding author: Wayne J. Dimech, Scientific and Business Relations Manager, National Serology Reference Laboratory, Australia, 4th Floor Healy Building, 41 Victoria Parade, Fitzroy, 3065, Australia, E-mail:

  1. Research funding: None declared.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: Authors state no conflict of interest.

  4. Informed consent: Not applicable.

  5. Ethical approval: Not applicable.

References

1. CLSI. Statistical quality control for quantitative measurement procedures: principles and definitions. CLSI guideline. Wayne, PA: Clinical and Laboratory Standards Institute; 2016.

2. Public Health England (PHE). UK Standards for Microbiology Investigations. London, UK: Standards Unit, National Infection Service, Public Health England; 2021.

3. National Pathology Accreditation Advisory Council. Requirements for quality control, external quality assurance and method evaluation. Canberra, Australia: Commonwealth of Australia, Department of Health; 2018.

4. Revision of the “Guideline of the German Medical Association on Quality Assurance in Medical Laboratory Examinations – Rili-BAEK” (unauthorized translation). J Lab Med 2015;39:26–69. https://doi.org/10.1515/labmed-2014-0046.

5. European Parliament and the Council of the European Union. Regulation (EU) 2017/745 of the European Parliament and of the Council on medical devices. Off J Eur Union 2017;60:1–157.

6. Dimech W. The standardization and control of serology and nucleic acid testing for infectious diseases. Clin Microbiol Rev 2021;34:1–16.

7. Galli C, Plebani M. Quality controls for serology: an unfinished agenda. Clin Chem Lab Med 2020;58:1169–70. https://doi.org/10.1515/cclm-2020-0304.

8. Infantino M, Damiani A, Gobbi FL, Grossi V, Lari B, Macchia D, et al. Serological assays for SARS-CoV-2 infectious disease: benefits, limitations and perspectives. Isr Med Assoc J 2020;22:203–10.

9. Perry J, Osman S, Wright J, Richard-Greenblatt M, Buchan SA, Sadarangani M, et al. Does a humoral correlate of protection exist for SARS-CoV-2? A systematic review. PLoS One 2022;17:e0266852. https://doi.org/10.1371/journal.pone.0266852.

10. International Medical Device Regulators Forum. Principles of in vitro diagnostic (IVD) medical devices classification. IMDRF/IVD WG/N64FINAL; 2021. Available from: https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-wng64.pdf.

11. Australian Government. Classification of IVD medical devices. Canberra: Department of Health, Therapeutic Goods Administration; 2015.

12. U.S. Food and Drug Administration. Overview of IVD regulation; 2021. Available from: https://www.fda.gov/medical-devices/ivd-regulatory-assistance/overview-ivd-regulation [Accessed 16 Sept 2022].

13. Panteghini M, Sandberg S. Defining analytical performance specifications 15 years after the Stockholm conference. Clin Chem Lab Med 2015;53:829–32. https://doi.org/10.1515/cclm-2015-0303.

14. Sandberg S, Fraser CG, Horvath AR, Jansen R, Jones G, Oosterhuis W, et al. Defining analytical performance specifications: consensus statement from the 1st strategic conference of the European Federation of Clinical Chemistry and Laboratory Medicine. Clin Chem Lab Med 2015;53:833–5. https://doi.org/10.1515/cclm-2015-0067.

15. Dimech W, Vincini G, Karakaltsas M. Determination of quality control limits for serological infectious disease testing using historical data. Clin Chem Lab Med 2015;53:329–36. https://doi.org/10.1515/cclm-2014-0546.

16. Public Health England. Quality assurance in the diagnostic virology and serology laboratory. London, UK: Standards Unit, Microbiology Services; 2021.

17. Späth P, Hoffmann D, Spitzenberger F. Influence of DIN EN ISO 15189 on the correctness of results in clinical virology. J Lab Med 2016;40:155–64. https://doi.org/10.1515/labmed-2016-0037.

18. CLSI. Laboratory quality control based on risk management; approved guideline. CLSI guideline EP23-A. Wayne, PA: Clinical and Laboratory Standards Institute; 2011.

19. Andreis E, Kullmer K, Appel M. Application of the reference method isotope dilution gas chromatography mass spectrometry (ID/GC/MS) to establish metrological traceability for calibration and control of blood glucose test systems. J Diabetes Sci Technol 2014;8:508–15. https://doi.org/10.1177/1932296814523886.

20. Westgard JO. Internal quality control: planning and implementation strategies. Ann Clin Biochem 2003;40:593–611. https://doi.org/10.1258/000456303770367199.

21. Dimech W, Walker S, Jardine D, Read S, Smeh K, Karakaltsas K, et al. Comprehensive quality control programme for serology and nucleic acid testing using an internet-based application. Accred Qual Assur 2004;8:148–51. https://doi.org/10.1007/s00769-003-0734-5.

22. Plebani M, Zaninotto M. Lot-to-lot variation: no longer a neglected issue. Clin Chem Lab Med 2022;60:645–6. https://doi.org/10.1515/cclm-2022-0128.

23. van Schrojenstein Lantman M, Cubukcu HC, Boursier G, Panteghini M, Bernabeu-Andreu FA, Milinkovic N, et al. An approach for determining allowable between reagent lot variation. Clin Chem Lab Med 2022;60:681–8. https://doi.org/10.1515/cclm-2022-0083.

24. Miller WG, Erek A, Cunningham TD, Oladipo O, Scott MG, Johnson RE. Commutability limitations influence quality control results with different reagent lots. Clin Chem 2011;57:76–83. https://doi.org/10.1373/clinchem.2010.148106.

25. Cho MC, Kim SY, Jeong TD, Lee W, Chun S, Min WK. Statistical validation of reagent lot change in the clinical chemistry laboratory can confer insights on good clinical laboratory practice. Ann Clin Biochem 2014;51:688–94. https://doi.org/10.1177/0004563214520749.

26. Thompson S, Chesher D. Lot-to-lot variation. Clin Biochem Rev 2018;39:51–60.

27. Dimech WJ, Vincini GA, Cabuang LM, Wieringa M. Does a change in quality control results influence the sensitivity of an anti-HCV test? Clin Chem Lab Med 2020;58:1372–80. https://doi.org/10.1515/cclm-2020-0031.

28. Kim J, Swantee C, Lee B, Gunning H, Chow A, Sidaway F, et al. Identification of performance problems in a commercial human immunodeficiency virus type 1 enzyme immunoassay by multiuser external quality control monitoring and real-time data analysis. J Clin Microbiol 2009;47:3114–20. https://doi.org/10.1128/JCM.00892-09.

29. Dimech W, Freame R, Smeh K, Wand H. A review of the relationship between quality control and donor sample results obtained from serological assays used for screening blood donations for anti-HIV and hepatitis B surface antigen. Accred Qual Assur 2013;18:11–8. https://doi.org/10.1007/s00769-012-0950-y.

30. Wand H, Dimech W, Freame R, Smeh K. Visual and statistical assessment of quality control results for the detection of hepatitis B surface antigen among Australian blood donors. Ann Clin Lab Res 2015;3:1–8. https://doi.org/10.21767/2386-5180.10008.

31. Wand H, Dimech W, Freame R, Smeh K. Identifying the critical cut-points of a quality control process for serological assays: results from parametric and semiparametric regression models. Accred Qual Assur 2017;22:191–8. https://doi.org/10.1007/s00769-017-1265-9.

Received: 2022-10-02
Accepted: 2022-10-26
Published Online: 2022-11-09
Published in Print: 2023-01-27

© 2022 Wayne J. Dimech et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
