Publicly available. Published by De Gruyter, February 10, 2021.

The impact of interventions applied in primary care to optimize the use of laboratory tests: a systematic review

  • Serena Lillo, Trine Rennebod Larsen, Leif Pennerup and Steen Antonsen

Abstract

Laboratory tests are important tools in primary care, but their use is sometimes inappropriate. The aim of this review is to give an overview of interventions applied in primary care to optimize the use of laboratory tests. A search for studies was made in the MEDLINE and EMBASE databases. We also extracted studies from two previous reviews published in 2015. Studies were included if they described application of an intervention aiming to optimize the use of laboratory tests. We also evaluated the overall risk of bias of the studies. We included 24 studies. The interventions were categorized as: education, feedback reports and computerized physician order entry (CPOE) strategies. Most of the studies were classified as medium or high risk of bias, while only three studies were evaluated as low risk of bias. The majority of the studies aimed at reducing the number of tests, while four studies investigated interventions aiming to increase the use of specific tests. Despite the studies being heterogeneous, we made the results comparable by transforming them into weighted relative changes in the number of tests when necessary. Education changed the number of tests consistently, and these results were supported by the low risk of bias of the papers. Feedback reports have mainly been applied in combination with education; when used alone, their effect was minimal. The use of CPOE strategies seems to produce a marked change in the number of test requests; however, the studies were of medium or high risk of bias.

Introduction

Laboratory tests are essential for screening, diagnosis and monitoring of diseases and are extensively used both in hospitals and primary care. It is generally recognized that the number of laboratory tests is increasing far more than the number of patients. The increased use of tests has probably benefited some patients through earlier diagnosis and treatment; however, their contribution to the quality of care is also debatable. This is especially the case when the usage is not based on clinical need and scientific evidence, but rather on habits, defensive medicine or similar causes [1].

Furthermore, the disproportional growth in laboratory utilization is considered economically unsustainable [2] and therefore it is important that the right tests are used in the right situation for the right patients. Thus, for all of these reasons a variety of interventional strategies have been investigated with respect to their capability to optimize the use of laboratory tests.

Primary care physicians are first in line to meet patients in the healthcare system, and they are responsible for a considerable proportion of the biochemical tests requested. Nevertheless, this sector is less well studied compared to the hospital setting regarding strategies to improve the application of laboratory tests in clinical situations.

To our knowledge, there are only two reviews [3], [4], both published in 2015, which have evaluated different interventions in terms of improving the use of laboratory tests in the primary care sector. Surprisingly, despite covering the same period, the two reviews have only four references in common out of a total of seventeen.

In this review, we give an updated overview of studies of different interventions aiming to optimize clinical use of laboratory tests by primary care physicians, taking the quality of the published studies into account. We summarize and compare the outcomes of different types of interventions with regards to the number of tests ordered, as well as changes in health care expenditures when reported by the authors.

Methods

Search strategy

The search was implemented in two electronic databases: Medline (1st January 2010–20th September 2019) and Embase (1st January 2010–20th September 2019). The collection period was limited from 2010 to the present because the two "2015 reviews" [3], [4] were considered to cover the previous period (1947–February 2014), and because this review is meant to give an overview of the interventions applied recently, since IT-based requesting systems have supplanted the paper request forms used in the past. Nevertheless, studies reported in the two 2015 reviews which fulfill our inclusion criteria were also included and evaluated in the present review.

No limits on language, subject or type were placed on the database searches. The searches were designed to be comprehensive and to cover a broad range of the field, including the hospital setting, to ensure that all types of connections or collaborations with primary care appeared in the search. The search query consisted of terms considered important by the authors to describe the subject of the review. The search was tailored to the specific requirements of each database (Supplemental Material 1).

All citations were imported into the Endnote X9 citation manager with duplicate citations being removed automatically or manually when found later in the process. Citations were then imported into the web-based review software program Covidence [5] for subsequent title and abstract relevance screening and data characterization of the full articles.

All potential papers underwent a two-step screening process: (1) all citations were screened based on the title and abstract by two independent reviewers (SL, SA); (2) the full text of each citation was read and assessed for inclusion by the same two reviewers independently. Disagreements were solved by discussion.

Criteria

Studies were included in the review based on the following inclusion and exclusion criteria:

Inclusion criteria

Essential

  1. the setting in which the intervention(s) were applied was primary care;

  2. one or more specific intervention(s) was applied in order to change the use of one or more laboratory tests;

  3. effectiveness of the intervention(s) was reported as a difference in the number of produced test(s) with and without the intervention.

Additional

  1. differences in test-related expenditure or healthcare costs with and without the intervention were reported.

Exclusion criteria

  1. no intervention was applied;

  2. interventions focused on other care settings than primary care;

  3. interventions were applied for other types of investigations than biochemical, immunological, genetic or microbiological laboratory tests;

  4. quantitative results were not reported or could not be extracted from the paper.

Data collection

For each study reported in this review, we extracted information regarding the authors, country, year of publication, evaluation period, type of interventional strategy(ies) used, laboratory test(s) evaluated, study design and outcomes in terms of:

  • number of tests requested before applying intervention(s);

  • number of tests requested after applying intervention(s);

  • difference in number of tests before and after intervention(s).

We evaluated the quality of the studies as the overall risk of bias translated to “high risk of bias”, “medium risk of bias” or “low risk of bias”, based on the Cochrane Effective Practice and Organization of Care (EPOC) suggested risk of bias [6] (Supplemental Material 2).

The study designs included: Randomized control trials (RCT) and cluster randomized controlled trials (CRCT); non-randomized control trials such as before and after controlled studies (CBA); before and after studies and prospective studies lacking a control group or control test (BA); and interrupted time series analysis (ITS) [7].

All the information was stored and handled in an Access 2007–2010 database.

Calculations

The effectiveness of the interventions was reported as a relative number, corresponding to the effect on the group of tests following the application of the intervention.

Studies describing more than one intervention were reported in this review with two or more results.

When the difference in tests used was reported as percentages, the study result was used directly (for example 20% being identical to 0.20).

Otherwise, if the results of a comparator (control group or test) were also reported in the study (before and after the intervention was applied), we adjusted the change in the intervention group by the change in the comparator before further calculations:

1. Calculating the adjustment of the intervention group before, based on the change in the comparator:

Δcomparator = comparator (after) − comparator (before)
Adjusted intervention group (before) = intervention group (before) + Δcomparator

2. Calculating the relative change in the intervention group, adjusted by the change in the comparator:

Relative change = (intervention group (after) − adjusted intervention group (before)) / adjusted intervention group (before)

In case the intervention was applied to more than one test in the same study, a weighted relative change was calculated based on the absolute numbers of the individual tests, both for the comparator (before and after) and the intervention group (before and after). The weight of each test was simply the number of requests of the individual test:

Weighted result = Σ [change (adjusted) in intervention group × number of tests in intervention group (after)] / total number of tests in intervention group (after)
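The two calculations above can be sketched as follows. This is our own illustration of the adjustment and weighting steps; the test counts are hypothetical and not taken from any included study.

```python
# Sketch of the comparator adjustment and test-count weighting described
# above (hypothetical numbers, our own illustration).

def adjusted_relative_change(int_before, int_after, comp_before=None, comp_after=None):
    """Relative change in the intervention group; if a comparator
    (control group or control test) is reported, the baseline is first
    shifted by the change observed in the comparator."""
    if comp_before is not None and comp_after is not None:
        int_before += comp_after - comp_before  # adjusted baseline
    return (int_after - int_before) / int_before

def weighted_relative_change(changes, counts_after):
    """Weight each test's (adjusted) relative change by the number of
    requests for that test after the intervention."""
    return sum(c * n for c, n in zip(changes, counts_after)) / sum(counts_after)

# Two hypothetical tests targeted by the same intervention:
change_a = adjusted_relative_change(1000, 700, comp_before=500, comp_after=550)
change_b = adjusted_relative_change(400, 380)  # no comparator reported
print(round(change_a, 3))  # -0.333 (baseline adjusted from 1000 to 1050)
print(round(change_b, 3))  # -0.05
print(round(weighted_relative_change([change_a, change_b], [700, 380]), 3))  # -0.234
```

With a comparator that itself grew by 50 requests, the intervention baseline is raised accordingly before the relative change is computed, so the intervention is not credited for a secular trend.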

Types of intervention

The interventions used in the included studies were categorized as follows:

Educational strategies (EDU)

Education: traditional education such as lectures, production and distribution of guidelines, audit, or guidelines integrated within clinical decision support systems (CDSS).

Cost display: the cost of the test(s) is displayed in the requisition system or otherwise made clear to the requester during ordering.

Feedback reports

Preparation of individualized reports showing the numbers of specific tests requested and/or graphic displays, most often compared with other requesters and/or previous time periods.

Computerized physician order entry (CPOE) strategies

Administrative changes: changes to the test requisition forms in the IT-System, such as changing order forms, making new order profiles or unbundling panels.

Soft-blocking: IT-based strategies that provide an interruptive "pop-up" alert that stops the workflow by requiring the user to acknowledge the alert or to cancel the ordering of the test.

Hard-blocking: a gate-keeping intervention that is used when a test is considered inadequate or repeated without clinical justification and is therefore not processed – for example, blocks caused by minimal retesting intervals. A special type of hard-blocking is reflex testing, where laboratory diagnostic algorithms automatically add orders of specific tests to a request or, alternatively, block the following test depending on the result of the initial test.
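The two hard-blocking mechanisms can be illustrated with a minimal sketch. This is our own hypothetical example, not code from any cited system; the HbA1c interval and the TSH threshold are assumptions chosen only for illustration.

```python
# Hypothetical sketch of hard blocking: a minimal retesting interval
# block and a reflex-testing rule (assumed interval and threshold,
# not from any cited system).
from datetime import date, timedelta

MINIMAL_RETEST_INTERVAL = {"HbA1c": timedelta(days=90)}  # assumed interval

def gate_keep(test, last_requested, today):
    """Block (do not process) a repeat request placed inside the
    minimal retesting interval for that test."""
    interval = MINIMAL_RETEST_INTERVAL.get(test)
    if interval and last_requested and today - last_requested < interval:
        return "blocked"
    return "processed"

def reflex(test, result, threshold=4.0):
    """Reflex testing: automatically add a follow-up test when the
    initial result is abnormal (here: add FT4 when TSH is high)."""
    if test == "TSH" and result > threshold:
        return ["FT4"]
    return []

print(gate_keep("HbA1c", date(2021, 1, 1), date(2021, 2, 1)))  # blocked
print(reflex("TSH", 6.2))  # ['FT4']
```

In practice such rules run inside the laboratory information system or CPOE, so the requester never sees the blocked order being processed.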

Results

Study selection

The electronic searches produced 9,985 studies (4,126 in Medline and 5,859 in Embase). After removing duplicates, 9,287 were screened at the title and abstract level. Four hundred and twenty-five articles were selected for full-text screening and of those, 332 studies were eliminated as they were not reporting original quantitative results relevant for the review topic. Eighty studies were excluded because the interventions were not applied to primary care.

The electronic search ended with 13 studies that met our pre-specified criteria and were included in the review. Seventeen studies were extracted from the two 2015 reviews [3], [4] and of those, six studies were excluded: three due to a different focus area, two because they were also included in our own search and one because no clear results were reported.

The selection process left a final pool of 24 studies (Figure 1) that were published between 1992 and September 2019.

Figure 1: PRISMA flow chart. Study selection.

Study characteristics

Table 1 summarizes the characteristics of the included studies. Five studies implemented more than one strategy to reduce the number of tests either to different pools of tests [8], [17], [26] or to the same pool of tests [12], [19] and therefore, the results of those studies have been split into two or three parts, respectively. Two studies used the same interventions in two different laboratory areas, and the results were therefore also reported in two sections [11], [28].

Table 1:

Characteristics of the studies reporting interventions applied to the over-use of tests.

References | Author, country, year | Design | Intervention designed in collaboration with requestor | Intervention | Comparator | Duration period | Tests | Cost evaluation | Relative reduction of targeted tests | Overall risk of bias

Education
[8] | Martins et al., Portugal, 2017 | RCT | No | CDSS | Control group | Six months pre-intervention, seven months intervention and eight months follow up | CA 19-9; cholesterol; glucose; hepatitis B surface antigen; hepatitis C antibody; VDRL | Not reported | −0.02 | Low
[9] | Beijer et al., The Netherlands, 2016 | CBA | Yes | Audit | Control group | Six months pre-intervention and six months intervention | Ferritin; folate; hemoglobin; iron; LDH; leukocytes; reticulocytes; thrombocytes; transferrin; vitamin B12 | Not reported | −0.09 | Medium
[10] | van Wijk et al., The Netherlands, 2001 | RCT | No | CDSS integrated with guidelines | Control group | Three months pre-intervention and one year intervention | Laboratory tests | Not reported | −0.2 | Medium
[11] | Verstappen et al., The Netherlands, 2003 | RCT | No | Guidelines, audit and feedback report | Control group | Six months pre-intervention, six months intervention and six months follow up | BUN; AST; LDH; amylase; alkaline phosphatase; bilirubin | Not reported | −0.12 | Low
[11] | Verstappen et al., The Netherlands, 2003 | RCT | No | Guidelines, audit and feedback report | Control group | Six months pre-intervention, six months intervention and six months follow up | IgE; leukocyte count | Not reported | −0.08 | Low
[12] | Thomas et al., UK, 2006 | CRCT | No | CDSS and feedback report | Control group | One year pre-intervention and one year intervention | Autoantibody screen; Ca-125; CEA; ferritin; FSH; Helicobacter pylori serum; IgE; TSH; vitamin B12 | Not reported | −0.31 | Low
[12] | Thomas et al., UK, 2006 | CRCT | No | Feedback report | Control group | One year pre-intervention and one year intervention | Autoantibody screen; Ca-125; CEA; ferritin; FSH; Helicobacter pylori serum; IgE; TSH; vitamin B12 | Not reported | −0.15 | Low
[12] | Thomas et al., UK, 2006 | CRCT | No | CDSS | Control group | One year pre-intervention and one year intervention | Autoantibody screen; Ca-125; CEA; ferritin; FSH; Helicobacter pylori serum; IgE; TSH; vitamin B12 | Not reported | −0.14 | Low
[13] | Larsson et al., Sweden, 1999 | CBA | No | Education program | Control group | Eight months pre-intervention, three months intervention and six months follow up | ALT/AST; bilirubin/alkaline phosphatase; cholesterol/HDL; cholesterol/total number; sodium/potassium; fT3/TSH; fT4/TSH | SEK 400,000 saved for assays that were recommended to decrease | −0.34 | Medium
[14] | Baker et al., UK, 2003 | RCT | No | Guidelines and feedback report | Control group | One year intervention | Lipids; viscosity; thyroid function; rheumatoid factor; urine culture | Not reported | 0 | Medium
[15] | Tomlin et al., New Zealand, 2011 | CBA | No | Guidelines and feedback report | Control group | Two years pre-intervention and two years intervention | CRP; ESR; TSH; FT4; FT3; culture; Giardia and Cryptosporidium; ova and parasites | Saving in net expenditure 25.1% | −0.212 | High
[16] | Horn et al., USA, 2013 | ITS | Yes | Cost displays | Control group | One year pre-intervention and six months intervention | 25-OH vitamin D; α-fetoprotein; ALT; BMP; BUN; BNP; CBC with differential; Chlamydia/GC genital screening; Chlamydia/GC urine; creatinine; CMP (screen); electrolytes; ESR; ferritin; glucose; group A beta-hemolytic streptococcus; HbA1c; iron binding profile; lipid profile; microalbumin (urine); Pap test; PTH; PSA; TSH; tissue transglutaminase; urine culture; urine analysis | The average cost saving was $4,545 per 1,000 visits per month | −0.01 | Medium

Administrative
[17] | Salinas et al., Spain, 2014 | CBA | Yes | Removing tests from profile | Control test | Two years pre-intervention and two years intervention | AST; GGT; phosphate | €34,064 saved post-intervention (savings calculated using the cost of extra test reagents) | −0.51 | Medium
[8] | Martins et al., Portugal, 2017 | RCT | No | Changing the shortcut menu | Control group | Six months pre-intervention, seven months intervention and eight months follow up | ESR; SPE; uric acid | Not reported | −0.53 | Low
[18] | Salinas et al., Spain, 2014 | CBA | Yes | Removing the test from the profiles | Control test | One year pre-intervention and one year intervention | Uric acid | €8,190 saved, resulting from the decrease in uric acid requests in the post-intervention period compared with the pre-intervention period | −0.7 | Medium
[19] | Pellicer et al., Spain, 2018 | CBA | No | Changing profiles | Control group | Six years pre-intervention and six months intervention | TPOab; TGab | Not reported | −0.65 | Medium
[20] | Zaat et al., The Netherlands, 1992 | CBA | No | Reducing the number of tests in the form | Control group | One year | From 178 tests to 15 tests | Not reported | −0.18 | Medium
[21] | Vardy et al., Israel, 2005 | BA | Yes | Changing form | Time period | Four months pre-intervention and four months intervention | Laboratory routine tests | Not reported | −0.02 | High
[22] | Bailey et al., UK, 2005 | BA | No | Changing form | Time period | Five years intervention | Calcium; LDH; rheumatoid factors; CRP | Not reported | −0.51 | High
[23] | Kahan et al., Israel, 2009 | CBA | No | Changing form | Control test | Five months pre-intervention and five months intervention | Vitamin B12; folic acid; ferritin | Not reported | −0.4 | High
[24] | Shalev et al., Israel, 2009 | BA | No | Removing tests from the form | Time period | One year pre-intervention and two years intervention | Tests deleted: AST; GGT; total bilirubin; uric acid; phosphorus; CK; PTT; CMV; EBV; antinuclear-ab; hepatitis B-ab; feces; FSH; beta-HCG quantitative; LH; hepatitis B core antibodies; E2; prolactin; progesterone; testosterone; rubella antibodies IgG; cortisol; hepatitis A IgM; sputum culture; pus culture; vagina culture. Tests unchanged in the order form: ALT; creatinine; CBC; glucose; cholesterol; triglyceride; urea/BUN; HDL; LDL; alkaline phosphatase; potassium; sodium; urine general; TSH; calcium; urine culture; albumin; total protein; ESR; PT; HbA1c; throat culture; stool culture; beta-HCG qualitative | The costs were marginally higher after the intervention period | −0.38 | High

Administrative + education
[25] | Seppanen et al., Finland, 2016 | CBA | No | Removing test from the order form + audit | Control test | Four years pre-intervention and four years intervention | AST; ESR | Not reported | −0.9 | Medium
[26] | Elnenaei et al., Canada, 2016 | BA | No | Unbundling panel + multifaceted educational approach | Time period | Three months | Chloride; CO2 | $42,500 for chloride and $48,000 for CO2 in annual marginal cost savings | −0.72 | High
[27] | Baricchi et al., Italy, 2012 | CBA | Yes | New profiles and audit | Control group | One year pre-intervention and one year intervention | Laboratory profiles: normal adult profile; patient with myeloma; monoclonal gammopathy; active chronic hepatitis; thyroid; hypertension; estrogen–progestogenic treatment | Not reported | −0.05 | High

Soft blocking
[28] | Camerotto et al., Italy, 2012 | CBA | Yes | Interruptive pop-up alert | Control group | Seven months intervention | 110 laboratory tests | Not reported | −0.26 | Medium
[28] | Camerotto et al., Italy, 2012 | CBA | Yes | Interruptive pop-up alert | Control group | Six months intervention | 110 laboratory tests | Not reported | −0.38 | Medium

Hard blocking
[19] | Pellicer et al., Spain, 2018 | CBA | No | Gate keeping | Control group | Six years pre-intervention and six months intervention | TPOab; TGab | Not reported | −0.13 | Medium
[17] | Salinas et al., Spain, 2014 | CBA | Yes | Reflex test | Control test | Two years pre-intervention and two years intervention | IgA anti-gliadin antibody; iron; transferrin; total bilirubin | €34,064 saved post-intervention (savings calculated using the cost of extra test reagents) | −0.15 | Medium

Hard blocking + education
[26] | Elnenaei et al., Canada, 2016 | BA | No | Gate keeping + multifaceted educational approach | Not reported | One year pre-intervention and one year intervention | 25-OH vitamin D; CEA; FOB; folate; LDH; SPE | $52,298 annual marginal cost savings | −0.51 | High
[29] | Salinas et al., Spain, 2015 | CBA | No | Gate keeping and comment display | Control test | Six years pre-intervention and four years intervention | 1,25-(OH)2 vitamin D | The strategies resulted in saving €1,200 (not specified which costs) | −0.514 | Medium
[30] | Caldarelli et al., Italy, 2017 | CBA | No | Reflex test + information letter to GPs | Time period | One year pre-intervention and two years intervention | TSH; FT4; FT3 | €11,079 saved after the intervention (calculated considering only the reagents; the cost of a single test was obtained by dividing the cost of the kit by the number of tests one kit can run) | −0.10 | Medium

Table 2 includes studies that reported interventions applied for increasing the use of tests. One study investigated exclusively interventions aiming at increasing the use of (under-requested) tests [31] while three studies [8], [13], [24] reported, as a part of their investigation, both results related to over-use of tests as well as to under-used tests and therefore those results have been described separately in Table 2.

Table 2:

Characteristics of the studies reporting interventions applied to the under-use of tests.

Reference | Author, country, year | Design | Intervention designed in collaboration with requestor | Intervention | Comparator | Duration period | Tests | Cost evaluation | Relative increase of targeted tests | Overall risk of bias

Education
[13] | Larsson et al., Sweden, 1999 | CBA | No | Educational program | Control group | Eight months pre-intervention, three months intervention and six months follow up | Calcium/total number; methylal/total number; ferritin/total number; TSH/total number; HbA1c/total number; triglycerides/cholesterol; U-albumin/total number | SEK 140,000 spent on assays that were recommended to increase | 0.013 | Medium

Administrative
[8] | Martins et al., Portugal, 2017 | RCT | No | Adding tests | Control group | Six months pre-intervention, seven months intervention and eight months follow up | FOB; HDL; triglycerides | Not reported | 0.08 | Low
[24] | Shalev et al., Israel, 2009 | BA | No | Adding tests | Time period | One year pre-intervention and two years intervention | Occult blood; bilirubin (neonatal) | The costs were marginally higher after the reference period | 0.94 | High

Hard blocking
[31] | Salinas et al., Spain, 2019 | BA | Yes | Reflex test | No comparator | Six months | Albumin-to-creatinine ratio (ACR) | The cost of strip analysis was €275.8, but the saving in albumin reagent was €1,450.3 | 0.079 | High

The studies included in this review were conducted in UK [12], [14], [22], Italy [27], [28], [30], The Netherlands [9], [10], [11], [20], Canada [26], USA [16], Israel [21], [23], [24], Sweden [13], Portugal [8], Spain [17], [18], [19], [29], [31], Finland [25], and New Zealand [15]. Although most were published in English, one was in Italian and another in Dutch.

The observational period in the 24 studies varied from three months to five years.

Study design and risk of bias

For 23 of the studies, the risk of bias was determined by following the criteria for RCT and non-RCT studies [6]. Eight studies had a high overall risk of bias, of which five were BA [21], [22], [24], [26], [31] and three were CBA [15], [23], [27]. Twelve were classified as having a medium risk of bias, of which 10 were CBA [9], [13], [17], [18], [19], [20], [25], [28], [29], [30] and two were RCT [10], [14]. Only three studies were classified as having a low risk of bias: one was a CRCT [12] and two were RCT [8], [11].

To determine the quality of the interrupted time series (ITS) study, the ITS criteria were followed [6], and the study [16] was evaluated as having a medium risk of bias.

Of the five RCT/CRCT studies, randomization was performed at the clinic level in three studies [10], [12], [14], at the server level in one [8] (meaning that all physicians using the same server received the same intervention), and at the level of the individual primary care physician in the last one [11].

Of the non-RCT studies, five used a test as control [17], [18], [23], [25], [29], and seven used a control group [9], [13], [15], [19], [20], [27], [28]. Five studies used a time period as comparator [21], [22], [24], [26], [30], while one prospective study had neither a control group/test nor a control period [31]. The ITS study [16] used a control group as comparator.

Intervention components

The majority of the studies (66%) used a single intervention approach, while the remaining studies used a combination of interventions.

Results obtained by the different types of strategies, combined with the risk of bias, are displayed graphically in Figures 2 and 3. It is clearly illustrated that the risk of bias is lower for studies investigating educational strategies, which showed relative reductions of approximately 10–30% in test numbers. The medium risk of bias studies were represented in all categories and showed results ranging from approximately +15% to −90%. The high risk of bias studies were mainly in the administrative category, and the range of their results was very broad, going from +94% to −72%.

Figure 2: Thirty-one results extracted from 23 studies reporting the effects of interventions intended to reduce the number of tests requested. Some of the results were extracted from the same reference.

Figure 3: Four results extracted from four studies reporting the effects of interventions intended to increase the number of tests requested. Some of the results were extracted from references that also reported interventions aiming to decrease the number of specific tests.

Studies published before 2010 reported only educational strategies (half of them supplemented by feedback reports) or administrative changes, while studies published later than 2010 implemented more technically complicated interventions (Table 3).

Table 3:

Distribution of interventions applied before and after 2010.

Strategy | Number of interventions

≤2010
Administrative | 6
Education | 4
Education + feedback report | 5

>2010
Administrative | 5
Administrative + education | 3
Education | 3
Education + feedback report | 1
Hard blocking | 3
Hard blocking + education | 3
Soft blocking | 2

Impact of interventions aiming to decrease the number of tests

Educational strategies

Education

The studies examining interventions based exclusively on an educational component generally had a medium or low risk of bias, except for one study with a high risk of bias. The duration of the applied interventions ranged from three months to two years.

Studies which applied traditional educational interventions, such as audit or distributed guidelines, reported results ranging from −0.08 to −0.34 [9], [11], [13], [15]. The overall conclusions for the audit and guideline interventions were generally positive, with the largest reduction in the number of tests achieved by an educational program consisting of a two-day lecture series, distribution of the references used during the lectures, and a follow-up meeting six months after the lectures [13].

Three studies which integrated guidelines into a CDSS achieved reductions ranging from −0.02 to −0.31, with the two low risk of bias RCTs achieving reductions of −0.02 and −0.31 [8], [12].

Economic outcomes of educational interventions were evaluated in two studies. Tomlin et al. [15] reported costs in terms of savings in net expenditure (up to 25%), while Larsson et al. [13] reported the direct laboratory costs saved (SEK 400,000).

Cost display

The study by Horn et al. [16] was the only one to examine the effect of cost display as a stand-alone intervention, showing no effect on the number of tests requested (−0.01).

Feedback reports

Feedback reports were predominantly combined with educational interventions [11], [15], and only two studies also evaluated the effect of feedback reports by themselves. Thomas et al. [12] assessed feedback reports both alone (−0.15) and combined with guidelines (−0.31), while Baker et al. [14] found that feedback reports initially combined with distribution of guidelines did not have any influence on tests ordered by GPs.

CPOE component

Administrative changes

Nine studies applied administrative changes exclusively, including removing or changing tests in profiles or changing tests in the requisition form or shortcut menu [8], [17], [18], [19], [20], [21], [22], [23], [24], while three studies combined these interventions with educational components [25], [26], [27].

The duration periods ranged from three months to five years.

The risk of bias for these studies was predominantly high [21], [22], [23], [24] or medium [17], [18], [19], [20], except for one study with a low risk of bias [8], and the results showed a very broad range (from −0.02 to −0.9).

Studies that removed tests from profiles were all classified as medium risk of bias and achieved comparable reductions (−0.51, −0.64 and −0.70, respectively) [17], [18], [19], while modifications of the requisition order form produced widely different results, ranging from −0.02 to −0.53 [8], [20], [21], [22], [23], [24]. The quality of these papers was very heterogeneous as well.

In three studies, two with a high risk of bias [26], [27] and one with a medium risk of bias [25], the administrative changes were combined with multifaceted educational approaches (such as training sessions, distribution of guidelines and reminders) [26] or audits [25], [27], achieving very different reductions of −0.72, −0.05 and −0.9, respectively.

Four studies reported the expenditures associated with the administrative approaches, described as either marginal costs of tests or differences in costs for test reagents. Reports of marginal costs showed savings between $42,500 and $48,000 in one study [26] and marginally higher costs after the intervention period in another [24]. Differences in costs for test reagents showed savings in both studies, of €34,064 and €8,190, respectively [17], [18].

Soft-blocking

One medium risk of bias study used a soft-blocking approach [28]. It consisted of a pop-up alert proposing indications for the use of the included tests. GPs were free to override the alert but were obliged to justify the order. The approach was applied to two groups of GPs attached to two different hospital laboratories. The strategy was considered successful, as the number of requests decreased consistently in both groups compared with the control group (−0.26 and −0.38, respectively).

Hard-blocking

Six studies (four with a medium and two with a high risk of bias) used more rigid approaches, either gate-keeping interventions that reject tests from the clinicians' orders [19], [26], [29] or reflex testing algorithms [17], [30], [31].

In three of the studies, hard blocking was combined with education [26], [29], [30]. The studies that combined gate keeping with multifaceted educational approaches (such as audit or letters with educational content sent to the highest-requesting GPs, and memorandums) [26] or with educational messages displayed in the GPs' requisition system [29] achieved larger effects than gate keeping alone [19] (−0.51 and −0.51 vs. −0.13). In contrast, the results obtained by reflex testing seemed to be independent of whether it was combined with educational interventions [30] or not [17] (−0.10 and −0.15, respectively).

Four studies also reported cost information, as either the cost of reagents [17], [29], [30] or the marginal costs of the test [26]. The reagent costs saved after reflex testing was applied were €11,079 [30] and €34,064 [17], while the gate-keeping approaches saved €1,200 in reagent costs [29] and $52,298 in marginal costs [26].

Impact of interventions aiming to increase the number of tests

Three studies applied interventions to increase the use of specific laboratory tests in addition to strategies to decrease the use of other tests [8], [13], [24]. Only one study exclusively applied an intervention to increase the use of the evaluated test [31].

Results of interventions aiming to increase the number of tests are reported in Figure 3.

Education

One medium risk of bias study [13] applied an educational method to recommend the use of specific tests; however, the strategy achieved only a modest effect (+0.08).

CPOE component

Administrative changes

Two studies also examined the effect of adding specific tests to the prescription format. The number of tests increased modestly (+0.08) in the low risk of bias study [8], while the increase was very high (+0.94) in the high risk of bias study [24].

Algorithm (Hard-blocking)

One high risk of bias study [31] evaluated the effect of automatically adding the urinary albumin to creatinine ratio (ACR) in patients with hypertension if the test had not been requested in the previous year, in accordance with the guideline. The authors reported that the algorithm automatically added +0.13 tests that would otherwise have been missed.
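The rule described in [31] amounts to a reflex adding step at order entry. A minimal sketch under assumed field names follows; the function and its parameters are ours, not the study's implementation:

```python
from datetime import date, timedelta

ONE_YEAR = timedelta(days=365)

def add_reflex_acr(ordered_tests, has_hypertension, last_acr_date, today):
    """Reflexively add ACR for a hypertensive patient when the test has not
    been requested within the previous year; otherwise leave the order as-is."""
    tests = set(ordered_tests)
    if (has_hypertension and "ACR" not in tests
            and (last_acr_date is None or today - last_acr_date > ONE_YEAR)):
        tests.add("ACR")  # the guideline test that would otherwise be missed
    return tests
```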

Studies designed together with requestors

Eight studies designed the intervention in consensus with the requestors [9], [16], [17], [18], [21], [27], [28], [31], and they are represented in all the intervention categories. The span of the results was very broad, ranging from +0.07 to −0.51. Moreover, none of these studies had a follow-up period.

Discussion

Here we present a systematic review of different interventions applied in primary care to optimize the use of laboratory tests. The review was designed to adjust the results of the interventions to make them comparable within each category. We summarized 24 studies, reporting 35 intervention results, evaluating the effect of strategies based on educational components, feedback reports, CPOE components or combinations of these to optimize test utilization.
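The adjustment to relative changes is simple arithmetic: each intervention's effect is expressed as (post − pre)/pre in number of tests, and when a result covers several tests it can be weighted by the pre-intervention volumes. The sketch below illustrates this general approach; it is not the paper's exact computation:

```python
def relative_change(pre, post):
    """Relative change in number of tests; -0.26 means a 26% reduction."""
    return (post - pre) / pre

def weighted_relative_change(results):
    """results: list of (pre, post) request counts for individual tests.
    Weighting by pre-intervention volume is equivalent to pooling counts."""
    total_pre = sum(pre for pre, _ in results)
    return sum(relative_change(pre, post) * pre / total_pre
               for pre, post in results)

# Two tests: 1000 -> 700 (-0.30) and 500 -> 450 (-0.10);
# pooled result: (1150 - 1500) / 1500 = -0.2333...
wrc = weighted_relative_change([(1000, 700), (500, 450)])
```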

We found only eight intervention results from studies evaluated as high quality due to a low risk of bias, while many studies were of lower quality, being classified as medium or even high risk of bias, which makes their results less valid.

Sustainability/applicability

High quality studies were primarily represented among papers evaluating educational components, and they often resulted in changes in test usage of 10–30%. The intervention periods were, however, generally short, with most studies reporting interventions of less than one year, and most reports lacked a follow-up period, so it is unknown whether the changes are stable or whether requesting patterns return to previous levels after the intervention period. One outstanding exception exists: the long-term effect of the study by Larsson et al. [13] was evaluated after eight years [32] and showed that short continuation courses on the optimal use of clinical chemistry can achieve permanent changes in the test ordering patterns of GPs.

One of the key roles of the laboratory is to provide easy and direct access to guidelines and educational material for examining and treating patients in the best possible way [33]. However, educational approaches often require a large amount of effort over time as well as dedicated staff, and may still not be sufficient on their own [1], [34], [35].

We found only one study applying a strategy where the costs of the tests are displayed in real time when the GP orders them [16], and it achieved insignificant results. In the hospital setting, this strategy has produced conflicting results: some studies reported that displaying the cost of tests led to a decrease in test ordering [36], [37], while other authors concluded the contrary [38]. Displaying costs on the GPs' requisition form is still sparsely investigated, but GPs tend to use more common and cheaper tests than hospitals, where the majority of special and expensive tests are requested. It is therefore likely that this intervention would have only a minor effect on the requesting patterns of GPs.

Feedback reports on their own seem to have modest or no effect on requests made in primary care. This is in contrast to findings in the hospital setting, where the use of feedback reports (described in combination with educational components) has been found to have clear and prolonged effects on physician laboratory test orders [39], [40]. However, as described by Pantoja et al. [41], the effect of feedback reports probably depends on a number of specific factors, such as how the reports are delivered to the GPs; whether they provide generic or specific information; whether they explain their content; whether they require the healthcare professional to record a response; and whether the targeted clinicians are involved in their development.

Many of the administrative interventions investigated alone or in combination with educational components were technically provided by the CPOE systems as changes of requesting profiles, new order profiles or unbundling panels.

Although administrative interventions achieved the greatest relative reductions, the range of results differed widely, probably due to the heterogeneity of the studies and the high risk of bias of many of them, which makes the results uncertain. The authors of the included studies reported that this kind of intervention requires relatively little effort to carry out and is easily maintained over time [17], [26], because changes in the IT systems are permanent as well as independent of conditions in the individual clinics. However, being system-dependent, it may not be applicable everywhere.

The one study that used pop-up alerts achieved a high reduction in the number of tests requested [28]. In hospitals, this approach has been used extensively, both with non-interruptive alerts, which passively display information such as redundant testing or suggestions of corollary orders [28], [42], and with interruptive alerts, which require the user to acknowledge the alert [43], [44]. A theoretical benefit of these alerts is that they can provide “just in time” advice and thereby long-term education. Conversely, this type of intervention carries the risk of “alert fatigue” when the alert is displayed over long periods, making the requester “blind” to its content and/or irritated by the interruptions it causes. This is probably most often seen when alerts are applied to tests that are requested in high numbers.

The use of reflex testing or gate-keeping rules has only recently been investigated among GPs (Table 3), with regard to both over-used and under-used tests, and these interventions seem to achieve positive effects. Gate keeping, whether or not it was combined with educational components, obtained comparable results in the included studies, but we did not find a clear effect of combining reflex testing with education. Nevertheless, the authors describe these advanced tools as interventions particularly suited to implementing clinical guidelines [19], because laboratory professionals are in a position to manage demand for laboratory tests by using evidence-based guidelines when developing specific test ordering directives and gate-keeping rules [26].

According to the review of Cadamuro et al. [33], laboratory diagnostic algorithms are the most sophisticated tools to influence the appropriate use of diagnostic testing, as they permit physicians to obtain relevant laboratory results without ordering individual tests and still arrive at a definitive diagnosis. It has been stated that this approach, together with a comprehensive, patient-specific interpretation of the test results, can substantially reduce costs by decreasing the time to diagnosis and the number of tests ordered without compromising patient care [39], [40], [41]. CPOE approaches are still only sparsely investigated in the primary care setting, and the studies included in this review are mainly of medium risk of bias, suggesting that additional low risk of bias investigations are needed in the future.

In a number of studies, interventions were designed in collaboration and applied in consensus with the requestors [9], [10], [12], [16], [17], [18], [21], [27], [28], [31]. In some of these [16], [20], [28], the GPs' opinion of the applied interventions was investigated by questionnaires or interviews. However, none of the studies in this review evaluated how the GPs view the impact of the different interventions on their daily practice. GPs are exposed to the intervention every day and, from the perspective of improving laboratory test requests, their opinion is essential. If the GPs consider the applied strategy annoying and/or an obstacle to their workflow, it should be re-evaluated. Applying strategies in close collaboration with the requestors seems to be an effective way to share information as well as potential problems that might otherwise be overlooked, but based on the present results we cannot substantiate this. Nevertheless, we think that mutual discussion may bring better results, or at least better implementation, and further studies in this direction are recommended.

The limitations reported included short follow-up periods and poor study designs. The most commonly mentioned limitation, however, concerned the effect of the interventions on the patients. All intervention studies stated that it was unknown whether the reduced number of tests after the intervention might have led to missed diagnoses. Evaluating the appropriateness of test requests both before and after applying an intervention is a difficult task, especially when the patients' clinical records are not integrated in the laboratory system. But if the intervention is well designed and the appropriate use of the test under investigation is well documented and subject to clear clinical guidelines, changes in the number of tests requested ought not to affect patient care.

Costs

The studies that reported changes in costs, either as marginal costs or as costs of reagents, did not evaluate the cost-benefit of the applied interventions. Reporting only the reduction in costs, without including the additional costs to the laboratory of setting up and maintaining the intervention or informing the requestors, does not give a clear picture of the total expense involved. What could be done is to estimate how many blood tests would have been taken had the intervention not been applied and, on that basis, evaluate whether the intervention was worthwhile.

Limitations

Our review has some limitations. First, our search covered only two databases, so potentially important studies indexed elsewhere could have been missed. In addition, the search terms did not include ‘demand management’, and it cannot be ruled out that some studies were missed on that account. Furthermore, because of the heterogeneity in how results were calculated in the different studies, we were not able to perform a traditional meta-analysis, so our study is mainly descriptive. By adjusting the results, however, we were able to make the included studies comparable as to relative changes in test numbers.

Conclusions and future research direction

This review gives an overview of the strategies recently applied in primary care to optimize the use of laboratory tests, as this setting is less well investigated than the hospital setting. We report that interventions including educational components consistently changed the number of tests, and these results were supported by the good quality of the studies. Feedback reports have mainly been applied in combination with educational interventions; when used alone, their effect has been found to be minimal. Administrative changes, both alone and in combination with education, seem to produce a marked change in the number of test requests; however, those studies were generally of medium or high risk of bias, making the results less reliable. Soft and hard blocking have only recently been implemented in primary care, and the number of studies (mainly of medium risk of bias) was too low to draw clear conclusions.

The laboratory costs reported in some of the studies are supplementary to the changes in number of tests and do not add further information to the analysis.

Future research is needed to evaluate the effectiveness of both soft and hard blocking, as well as other approaches and relevant combinations of interventions that remain uninvestigated in primary care, for example the use of feedback reports in combination with CPOE interventions. Furthermore, minimum retesting intervals, which have been used extensively in the hospital setting and for which clear guidelines have been published [45], await proper investigation in primary care.

Designing and applying interventions in consensus with the GPs seems to be good practice for sharing information between the laboratory and the primary care setting. However, as all the studies using this method lacked follow-up periods, it is not possible to say whether the applied interventions resulted in long-lasting changes.

Moreover, collaboration on the optimization process can be expected to influence the GPs' opinion of the applied interventions; this evaluation of appropriateness should be included in future studies, together with changes in the number of tests.


Corresponding author: Serena Lillo, MSc, Biochemistry Department, Odense University Hospital (OUH) and Svendborg Hospital, Baagøes Alle 15, 5700 Svendborg, Denmark, E-mail:

  1. Research funding: None declared.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: Authors state no conflict of interest.

References

1. Yeh, DD. A clinician’s perspective on laboratory utilization management. Clin Chim Acta 2014;427:145–50. https://doi.org/10.1016/j.cca.2013.09.023.

2. Huck, A, Lewandrowski, K. Utilization management in the clinical laboratory: an introduction and overview of the literature. Clin Chim Acta 2014;427:111–7. https://doi.org/10.1016/j.cca.2013.09.021.

3. Cadogan, SL, Browne, JP, Bradley, CP, Cahill, MR. The effectiveness of interventions to improve laboratory requesting patterns among primary care physicians: a systematic review. Implement Sci 2015;10:167. https://doi.org/10.1186/s13012-015-0356-4.

4. Thomas, RE, Vaska, M, Naugler, C, Turin, TC. Interventions at the laboratory level to reduce laboratory test ordering by family physicians: systematic review. Clin Biochem 2015;48:1358–65. https://doi.org/10.1016/j.clinbiochem.2015.09.014.

5. Covidence systematic review software. Available from: https://www.covidence.org/home.

6. Cochrane Effective Practice and Organisation of Care (EPOC). Suggested risk of bias criteria for EPOC reviews. Available from: https://epoc.cochrane.org/sites/epoc.cochrane.org/files/public/uploads/Resources-for-authors2017/suggested_risk_of_bias_criteria_for_epoc_reviews.pdf.

7. Cochrane Consumers and Communication Review Group. Author resources. Available from: https://cccrg.cochrane.org/author-resources.

8. Martins, CMS, da Costa Teixeira, AS, de Azevedo, LFR, Sa, LMB, Santos, PAAP, do Couto, MLGD, et al. The effect of a test ordering software intervention on the prescription of unnecessary laboratory tests - a randomized controlled trial. BMC Med Inf Decis Mak 2017;17:20. https://doi.org/10.1186/s12911-017-0416-6.

9. Beijer, C, de Jong, AM, Statius Muller, I, Eysink Smeets, J, Pronk-Admiraal, C. Blood test requests after diagnostic consultation. Huisarts Wet 2016;59:434–8. https://doi.org/10.1007/s12445-016-0267-x.

10. van Wijk, MA, van der Lei, J, Mosseveld, M, Bohnen, AM, van Bemmel, JH. Assessment of decision support for blood test ordering in primary care. A randomized trial. Ann Intern Med 2001;134:274–81. https://doi.org/10.7326/0003-4819-134-4-200102200-00010.

11. Verstappen, WH, van der Weijden, T, Sijbrandij, J, Smeele, I, Hermsen, J, Grimshaw, J, et al. Effect of a practice-based strategy on test ordering performance of primary care physicians: a randomized trial. JAMA 2003;289:2407–12. https://doi.org/10.1001/jama.289.18.2407.

12. Thomas, RE, Croal, BL, Ramsay, C, Eccles, M, Grimshaw, J. Effect of enhanced feedback and brief educational reminder messages on laboratory test requesting in primary care: a cluster randomised trial. Lancet 2006;367:1990–6. https://doi.org/10.1016/s0140-6736(06)68888-0.

13. Larsson, A, Blom, S, Wernroth, ML, Hulten, G, Tryding, N. Effects of an education programme to change clinical laboratory testing habits in primary care. Scand J Prim Health Care 1999;17:238–43. https://doi.org/10.1080/028134399750002476.

14. Baker, R, Falconer Smith, J, Lambert, PC. Randomised controlled trial of the effectiveness of feedback in improving test ordering in general practice. Scand J Prim Health Care 2003;21:219–23. https://doi.org/10.1080/02813430310002995.

15. Tomlin, A, Dovey, S, Gauld, R, Tilyard, M. Better use of primary care laboratory services following interventions to ‘market’ clinical guidelines in New Zealand: a controlled before-and-after study. BMJ Qual Saf 2011;20:282–90. https://doi.org/10.1136/bmjqs.2010.048124.

16. Horn, DM, Koplan, KE, Senese, MD, Orav, EJ, Sequist, TD. The impact of cost displays on primary care physician laboratory test ordering. J Gen Intern Med 2014;29:708–14. https://doi.org/10.1007/s11606-013-2672-1.

17. Salinas, M, Lopez-Garrigos, M, Asencio, A, Leiva-Salinas, M, Lugo, J, Leiva-Salinas, C. Laboratory utilization improvement through a computer-aided algorithm developed with general practitioners. Clin Chem Lab Med 2015;53:1391–7. https://doi.org/10.1515/cclm-2014-0762.

18. Salinas, M, Lopez-Garrigos, M, Asencio, A, Batlle, E, Minguez, M, Lugo, J, et al. Strategy to improve the request of uric acid in primary care: preliminary results and evaluation through process and outcome appropriateness indicators. Clin Biochem 2014;47:467–70. https://doi.org/10.1016/j.clinbiochem.2013.12.025.

19. Pellicer, PS, Tamayo, RG, Lopez, VN. Reducing test request for anti-thyroglobulin and anti-thyroid peroxidase antibodies: trends before and after interventions based on rejection rules and profile management. Biochem Med 2018;28:030709. https://doi.org/10.11613/bm.2018.030709.

20. Zaat, JO, van Eijk, JT, Bonte, HA. Laboratory test form design influences test ordering by general practitioners in The Netherlands. Med Care 1992;30:189–98. https://doi.org/10.1097/00005650-199203000-00001.

21. Vardy, DA, Simon, T, Limoni, Y, Kuperman, O, Rabzon, I, Cohen, A, et al. The impact of structured laboratory routines in computerized medical records in a primary care service setting. J Med Syst 2005;29:619–26. https://doi.org/10.1007/s10916-005-6130-4.

22. Bailey, J, Jennings, A, Parapia, L. Change of pathology request forms can reduce unwanted requests and tests. J Clin Pathol 2005;58:853–5. https://doi.org/10.1136/jcp.2004.023101.

23. Kahan, NR, Waitman, DA, Vardy, DA. Curtailing laboratory test ordering in a managed care setting through redesign of a computerized order form. Am J Manag Care 2009;15:173–6.

24. Shalev, V, Chodick, G, Heymann, AD. Format change of a laboratory test order form affects physician behavior. Int J Med Inform 2009;78:639–44. https://doi.org/10.1016/j.ijmedinf.2009.04.011.

25. Seppanen, K, Kauppila, T, Pitkala, K, Kautiainen, H, Puustinen, R, Iivanainen, A, et al. Altering a computerized laboratory test order form rationalizes ordering of laboratory tests in primary care physicians. Int J Med Inform 2016;86:49–53. https://doi.org/10.1016/j.ijmedinf.2015.11.013.

26. Elnenaei, MO, Campbell, SG, Thoni, AJ, Lou, A, Crocker, BD, Nassar, BA. An effective utilization management strategy by dual approach of influencing physician ordering and gate keeping. Clin Biochem 2016;49:208–12. https://doi.org/10.1016/j.clinbiochem.2015.11.005.

27. Baricchi, R, Zini, M, Nibali, MG, Vezzosi, W, Insegnante, V, Manfuso, C, et al. Using pathology-specific laboratory profiles in clinical pathology to reduce inappropriate test requesting: two completed audit cycles. BMC Health Serv Res 2012;12:187. https://doi.org/10.1186/1472-6963-12-187.

28. Camerotto, A, Pozzato, A, Truppo, V, Bedendo, S, Angiolelli, G, Lucchiari, A, et al. The software GOELM as an innovative way of knowledge management and prescription of laboratory tests: the experience of two Local Healthcare Units (ULSS). Biochim Clin 2012;36:98–106.

29. Salinas, M, Lopez-Garrigos, M, Flores, E, Leiva-Salinas, M, Ahumada, M, Leiva-Salinas, C. Education and communication is the key for the successful management of vitamin D test requesting. Biochem Med 2015;25:237–41. https://doi.org/10.11613/bm.2015.024.

30. Caldarelli, G, Troiano, G, Rosadini, D, Nante, N. Adoption of TSH Reflex algorithm in an Italian clinical laboratory. Ann Ig 2017;29:317–22.

31. Salinas, M, Lopez-Garrigos, M, Flores, E, Ahumada, M, Leiva-Salinas, C. Laboratory intervention to improve the request of urinary albumin in primary care patients with arterial hypertension and financial implications. Clin Biochem 2019;69:48–51. https://doi.org/10.1016/j.clinbiochem.2019.04.012.

32. Mindemark, M, Larsson, A. Long-term effects of an education programme on the optimal use of clinical chemistry testing in primary health care. Scand J Clin Lab Invest 2009;69:481–6. https://doi.org/10.1080/00365510902749123.

33. Cadamuro, J, Ibarz, M, Cornes, M, Nybo, M, Haschke-Becher, E, von Meyer, A, et al. Managing inappropriate utilization of laboratory resources. Diagnosis (Berl) 2019;6:5–13. https://doi.org/10.1515/dx-2018-0029.

34. Rubinstein, M, Hirsch, R, Bandyopadhyay, K, Madison, B, Taylor, T, Ranne, A, et al. Effectiveness of practices to support appropriate laboratory test utilization: a laboratory medicine best practices systematic review and meta-analysis. Am J Clin Pathol 2018;149:197–221. https://doi.org/10.1093/ajcp/aqx147.

35. Kobewka, DM, Ronksley, PE, McKay, JA, Forster, AJ, Van Walraven, C. Influence of educational, audit and feedback, system based, and incentive and penalty interventions to reduce laboratory test utilization: a systematic review. Clin Chem Lab Med 2015;53:157–83. https://doi.org/10.1515/cclm-2014-0778.

36. Feldman, LS, Shihab, HM, Thiemann, D, Yeh, H-C, Ardolino, M, Mandell, S, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med 2013;173:903–8. https://doi.org/10.1001/jamainternmed.2013.232.

37. Fang, DZ, Sran, G, Gessner, D, Loftus, PD, Folkins, A, Christopher, JY 3rd, et al. Cost and turn-around time display decreases inpatient ordering of reference laboratory tests: a time series. BMJ Qual Saf 2014;23:994–1000. https://doi.org/10.1136/bmjqs-2014-003053.

38. Sedrak, MS, Myers, JS, Small, DS, Nachamkin, I, Ziemba, JB, Murray, D, et al. Effect of a price transparency intervention in the electronic health record on clinician ordering of inpatient laboratory tests: the PRICE randomized clinical trial. JAMA Intern Med 2017;177:939–45. https://doi.org/10.1001/jamainternmed.2017.1144.

39. Erlingsdottir, H, Johannesson, A, Asgeirsdottir, TL. Can physician laboratory-test requests be influenced by interventions? Scand J Clin Lab Invest 2015;75:18–26. https://doi.org/10.3109/00365513.2014.965734.

40. Giordano, D, Zasa, M, Iaccarino, C, Vincenti, V, Dascola, I, Brevi, BC, et al. Improving laboratory test ordering can reduce costs in surgical wards. Acta Biomed 2015;86:32–7.

41. Pantoja, T, Grimshaw, JM, Colomer, N, Castañon, C, Leniz Martelli, J. Manually-generated reminders delivered on paper: effects on professional practice and patient outcomes. Cochrane Database Syst Rev 2019;12:CD001174. https://doi.org/10.1002/14651858.CD001174.pub4.

42. Eaton, KP, Chida, N, Apfel, A, Feldman, L, Greenbaum, A, Tuddenham, S, et al. Impact of nonintrusive clinical decision support systems on laboratory test utilization in a large academic centre. J Eval Clin Pract 2018. https://doi.org/10.1111/jep.12890.

43. Levick, DL, Stern, G, Meyerhoefer, CD, Levick, A, Pucklavage, D. Reducing unnecessary testing in a CPOE system through implementation of a targeted CDS intervention. BMC Med Inf Decis Making 2013;13:43. https://doi.org/10.1186/1472-6947-13-43.

44. Hinson, JS, Mistry, B, Hsieh, Y-H, Risko, N, Scordino, D, Paziana, K, et al. Using the electronic medical record to reduce unnecessary ordering of coagulation studies for patients with chest pain. West J Emerg Med 2017;18:267–9. https://doi.org/10.5811/westjem.2016.12.31927.

45. Clinical Practice Group of the Association for Clinical Biochemistry and Laboratory Medicine, supported by the Royal College of Pathologists. National Minimum Re-testing Interval Project: a final report detailing consensus recommendations for minimum re-testing intervals for use in Clinical Biochemistry. Available from: https://www.acb.org.uk/docs/default-source/guidelines/acb-mri-recommendations-a4-computer.pdf [accessed July 2013].


Supplementary Material

The online version of this article offers supplementary material (https://doi.org/10.1515/cclm-2020-1734).


Received: 2020-11-20
Accepted: 2021-01-26
Published Online: 2021-02-10
Published in Print: 2021-07-27

© 2021 Walter de Gruyter GmbH, Berlin/Boston
