Open Access (CC BY 4.0) | Published by De Gruyter, October 31, 2022

Performance of digital morphology analyzer CellaVision DC-1

  • Gun-Hyuk Lee, Sumi Yoon, Minjeong Nam, Hanah Kim and Mina Hur (corresponding author)

Abstract

Objectives

CellaVision DC-1 (DC-1, Sysmex, Kobe, Japan) is a newly launched digital morphology analyzer that was developed mainly for small to medium-volume laboratories. We evaluated the precision and qualitative performance of DC-1, compared its cell counts with manual counting, and assessed its turnaround time (TAT).

Methods

Using five peripheral blood smear (PBS) slides spanning the normal white blood cell (WBC) range, the precision and qualitative performance of DC-1 were evaluated according to the Clinical and Laboratory Standards Institute (CLSI) EP15-A3, EP15-Ed3-IG1, and EP12-A2 guidelines. Cell counts of DC-1 and manual counting were compared according to the CLSI EP09c-Ed3 guidelines, and the TAT of DC-1 was compared with that of manual counting.

Results

DC-1 showed excellent precision (%CV, 0.0–3.5%), high specificity (98.9–100.0%), and high negative predictive value (98.4–100.0%) in 18 cell classes (12 WBC classes and six non-WBC classes). However, DC-1 showed a positive predictive value of 0% in seven cell classes (metamyelocytes, myelocytes, promyelocytes, blasts, plasma cells, nucleated red blood cells, and unidentified). The largest absolute mean difference (%) between DC-1 and manual counting was 2.74. Total TAT (min:s) was comparable between DC-1 (8:55) and manual counting (8:55).

Conclusions

This is the first study that comprehensively evaluated the performance of DC-1, including its TAT. DC-1 shows reliable performance and can be used in small to medium-volume laboratories to assist PBS review. However, DC-1 may create unnecessary workload for cell verification in some cell classes.

Introduction

Microscopic examination of peripheral blood smears (PBS) is essential in clinical hematology laboratories [1]. Thorough PBS review performed by hematology experts is important because it improves detection of underpopulated cells, and analysis of cell morphology on PBS is fundamental for the diagnosis of hematologic diseases [1], [2], [3]. Manual counting is the gold standard for the white blood cell (WBC) differential according to the International Council for Standardization in Hematology (ICSH) guidelines [4]. Manual counting is, however, inefficient because the process is technically demanding and labor-intensive, resulting in a long turnaround time (TAT), and the results may be subjective, with inter-observer variation [5, 6].

Digital morphology (DM) analyzers can provide analysis of cell morphology (preclassification) with reduced TAT and inter-observer variation. In one study, DM analyzers showed advantages over manual counting in laboratory efficiency, including shortened TAT [7]. Several DM analyzers have been launched since the 1970s, including the CellaVision DM-96 (CellaVision AB, Lund, Sweden), Sysmex DI-60 (DI-60, Sysmex, Kobe, Japan), and Vision Pro (West Medica, Perchtoldsdorf, Austria) [8], [9], [10], [11]. These DM analyzers are used mainly in large-volume laboratories; they are too large and expensive for small to medium-volume laboratories [11]. Small and medium-volume laboratories are defined as performing not more than 2,000 tests/year and 100,000 tests/year, respectively [12].

Recently, the CellaVision DC-1 (DC-1, Sysmex, Kobe, Japan) was launched. It is a compact and easy-to-handle device compared with other DM analyzers. Hematology experts can verify the preclassified results remotely through the DC-1 remote review software, which makes telemedicine possible. Accordingly, DC-1 may be useful in small to medium-volume laboratories that lack hematology experts.

A recent ICSH review recommends that preclassified results from DM analyzers should be verified by hematology experts, especially in patients with suspected pathological cells [2]. All DM analyzers should use high optical magnification (×1,000) for standardization [13, 26], and external quality control might be desirable when using DM analyzers in clinical laboratories [14]. The ICSH guidelines recommend evaluating all performance characteristics when a new DM analyzer is launched [4]. To the best of our knowledge, there has been only one study that evaluated the performance of DC-1 [15]. In that study, only DC-1 post-classification (verification) was compared with manual counting, and the comparison was confined to six major cell classes (five differentials and blasts). Accordingly, there has been no comparison encompassing DC-1 pre-classification, user verification, and manual counting. Moreover, the analytical performance of DC-1 for each of the 18 cell classes was not explored, although its clinical sensitivity and specificity were reported for the six cell classes. Accordingly, there is room for further evaluation of DC-1, including its TAT. In this study, we comprehensively evaluated DC-1 in terms of precision, qualitative performance, comparison of cell counts between DC-1 and manual counting, and TAT on PBS slides spanning the normal WBC range.

Materials and methods

Study samples

For this in vitro experimental study, five PBS slides were selected during routine PBS review in January 2021 at Konkuk University Medical Center (KUMC), Seoul, Korea. These PBS slides were obtained from healthy individuals and spanned the normal WBC range, including mild leukopenia (2.0–4.0 × 10⁹/L) and mild leukocytosis (10.0–15.0 × 10⁹/L). WBC counts in these samples were 3.90 × 10⁹/L, 5.19 × 10⁹/L, 6.72 × 10⁹/L, 8.72 × 10⁹/L, and 10.78 × 10⁹/L, respectively. Venous whole blood samples collected in K3-EDTA-containing tubes (Greiner Bio-One International GmbH, Frickenhausen, Germany) were used for complete blood counts on the XN-9000 (Sysmex, Kobe, Japan), and PBS slides were prepared and reviewed for WBC differentials [1]. This study was conducted according to the Declaration of Helsinki, and the study protocol was approved by the Institutional Review Board of KUMC (2022-02-050, 2 March 2022). The requirement for informed consent was waived because no additional blood sample collection or intervention was required.

PBS review on DC-1 and manual counting

DC-1 uses an automated microscope, a high-quality digital camera, and artificial intelligence to preclassify cells on stained PBS slides. The digital images, together with the WBC differential results, can be stored in a database. The RAL SmearBox and RAL StainBox, with methanol-free reagents, can be combined with DC-1 to optimize PBS preparation, so that the whole hematology workflow is covered. In this study, the SP-10 (Sysmex) was used for PBS preparation with the RAL May-Grünwald Giemsa staining method (RAL Diagnostics, Bordeaux, France) rather than the RAL SmearBox and RAL StainBox.

Preclassification on DC-1 includes a total of 18 cell classes (12 WBC classes and six non-WBC classes). The 12 WBC classes are blasts, promyelocytes, myelocytes, metamyelocytes, band neutrophils, segmented neutrophils, lymphocytes, monocytes, eosinophils, basophils, variant lymphocytes, and plasma cells. The six non-WBC classes are nucleated RBCs (nRBCs), smudge cells, artifacts, giant platelets, platelet aggregation, and unidentified cells. Preclassified results are verified by hematology experts. According to the Clinical and Laboratory Standards Institute (CLSI) H20-A2 guidelines, band and segmented neutrophils are counted separately, and this practice remains in Korean and Japanese laboratories; however, band and segmented neutrophils are not counted separately in Europe and the United States [5, 10, 16]. In the default setting, DC-1 does not separate band and segmented neutrophils; users can choose whether band neutrophils are reported separately. In this study, band neutrophils were counted separately. DC-1 completes analysis when 210 cells have been counted during preclassification. Plasma cells and non-WBC cells are not included in the 210-cell count; accordingly, the actual number of cells may exceed 210 cells/run when plasma cells and non-WBC cells are included (a simplified sketch of this stopping rule is shown below). For TAT analysis, log data from DC-1 were used; the data include the TAT of ideal-zone scanning, preclassification, and verification. The TAT of preparing and changing PBS slides between runs (inserting, removing, and oil immersion) was recorded manually.
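
To make the counting rule above concrete, the following minimal Python sketch models the stopping criterion under the stated assumption that plasma cells and non-WBC classes are excluded from the 210-cell target; the class labels and function name are illustrative and are not part of the DC-1 software.

```python
# Hypothetical sketch of the stopping rule described above: counting stops
# once 210 cells have been tallied, excluding plasma cells and the non-WBC
# classes, so the total number of collected cells can exceed 210.
EXCLUDED_FROM_TARGET = {
    "plasma cells", "nRBCs", "smudge cells", "artifact",
    "giant platelet", "platelet aggregation", "unidentified",
}

def collect_until_210(cell_stream, target=210):
    """Collect preclassified cells until `target` countable cells are reached."""
    collected, countable = [], 0
    for cell_class in cell_stream:
        collected.append(cell_class)
        if cell_class not in EXCLUDED_FROM_TARGET:
            countable += 1
        if countable >= target:
            break
    return collected
```

Feeding a stream of preclassified labels into `collect_until_210` returns slightly more than 210 cells whenever excluded classes are interspersed, which is consistent with the median of 222 cells/slide reported in the Results.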

According to the CLSI H20-A2 guidelines, manual counting was performed with two hematology experts counting 200 cells each, and the average values were used for evaluation [16]. Although Romanowsky-stained PBS slides are the gold standard for manual counting, RAL-stained PBS slides were analyzed in this study [2, 16]. TAT of manual counting was recorded manually.

Statistical analysis

The results of DC-1 and manual counting were expressed as median (interquartile range, IQR) or number (percentage, %). DC-1 preclassified 18 cell classes on each PBS slide. In addition, we evaluated whether DC-1 can count 210 cells on PBS slides spanning the normal WBC range, including mild leukopenia and mild leukocytosis.

For precision, repeatability was analyzed, and the results were expressed as standard deviation (SD) and % coefficient of variation (%CV). Each PBS slide was evaluated in five replicates per day for five days (5 × 5 experimental design), according to the CLSI EP15-A3 and EP15-Ed3-IG1 guidelines [17, 18]; thus, DC-1 performed 25 replicates per slide, for a total of 125 replicates in this study. Because each PBS slide had a different cell count and WBC differential, precision was evaluated for each slide. %CV was interpreted as follows: %CV≤10%, excellent; %CV 10–20%, good; %CV 20–30%, acceptable; %CV>30%, poor [19].
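
As an illustration of how repeatability could be summarized from such replicate data, the minimal Python sketch below pools all replicates of one slide into a single SD and %CV and applies the interpretation bands above. This is a simplification of the CLSI EP15-A3 approach (which separates within-run and between-day components); the data and function names are hypothetical.

```python
import numpy as np

def repeatability(replicates):
    """Return mean, SD, and %CV across replicate results for one cell class."""
    x = np.asarray(replicates, dtype=float)
    mean = x.mean()
    sd = x.std(ddof=1)                      # sample SD across all replicates
    cv = 100.0 * sd / mean if mean > 0 else float("nan")
    return mean, sd, cv

def interpret_cv(cv):
    """Interpretation bands used in this study (reference [19])."""
    if cv <= 10: return "excellent"
    if cv <= 20: return "good"
    if cv <= 30: return "acceptable"
    return "poor"

# Hypothetical segmented-neutrophil percentages from 25 replicates of one slide
values = [62.1, 61.8, 62.4, 62.0, 61.9] * 5
mean, sd, cv = repeatability(values)
print(f"mean={mean:.1f}%, SD={sd:.2f}, %CV={cv:.2f} ({interpret_cv(cv)})")
```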

On the basis of verification, the qualitative performance of preclassification was evaluated by sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and efficiency according to the CLSI EP12-A2 guidelines [20].
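
For each cell class, these qualitative metrics follow from a 2×2 comparison of preclassification against verification. A minimal Python sketch with hypothetical counts is shown below; the formulas are the standard definitions, and the "not available" behavior mirrors the cell classes with no verified cells.

```python
def qualitative_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and efficiency (%) from a 2x2 table.

    tp: preclassified into the class and verified as that class
    fp: preclassified into the class but verified as another class
    fn: preclassified as another class but verified as this class
    tn: neither preclassified nor verified as this class
    """
    def pct(num, den):
        return 100.0 * num / den if den else float("nan")   # NA when undefined

    return {
        "sensitivity": pct(tp, tp + fn),
        "specificity": pct(tn, tn + fp),
        "PPV": pct(tp, tp + fp),
        "NPV": pct(tn, tn + fn),
        "efficiency": pct(tp + tn, tp + fp + fn + tn),
    }

# Hypothetical counts for a single cell class pooled over all replicates
print(qualitative_performance(tp=320, fp=3, fn=2, tn=27000))
```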

For method comparison, Bland-Altman plots and Passing-Bablok regression analysis were used according to the CLSI EP09c-Ed3 guidelines [21]. Preclassification, verification, and manual counting were analyzed using Bland-Altman plots, and the 18 cell classes were compared. Because manual counting is the gold standard for the WBC differential, the correlations of preclassification and of verification with manual counting were analyzed; the correlation of preclassification with verification was also analyzed. Using Passing-Bablok regression, six WBC classes (segmented neutrophils, band neutrophils, lymphocytes, monocytes, eosinophils, and basophils) were compared. Pearson's correlation coefficients (r) with 95% confidence intervals (CI) were obtained and interpreted as follows: <0.30, negligible; 0.30–0.50, low; 0.50–0.70, moderate; 0.70–0.90, high; 0.90–1.00, very high [22].
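
As an illustration of the comparison statistics, the sketch below computes a Bland-Altman mean difference with 95% limits of agreement and a Pearson r for one cell class from hypothetical paired percentages; the Passing-Bablok regression itself was performed in MedCalc and is not reproduced here.

```python
import numpy as np
from scipy import stats

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between two methods."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    mean_diff = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mean_diff, (mean_diff - half_width, mean_diff + half_width)

# Hypothetical paired lymphocyte percentages (DC-1 preclassification vs. manual)
dc1    = [25.2, 16.6, 25.6, 30.1, 22.4]
manual = [27.0, 18.1, 27.5, 32.0, 24.8]

md, (lo, hi) = bland_altman(dc1, manual)
r, p = stats.pearsonr(dc1, manual)       # interpreted per the bands above [22]
print(f"mean difference={md:.2f}% (LoA {lo:.2f} to {hi:.2f}), r={r:.2f}")
```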

TAT (min:s) of DC-1 and of manual counting was analyzed across four process steps: first, the TAT of preparing and changing PBS slides between runs (inserting, removing, and oil immersion); second, the TAT of scanning the ideal zone under low power (100×); third, the TAT of preclassification under high power (1,000×) in DC-1 or the TAT of counting cells under high power (1,000×) in manual counting; and fourth, the TAT of verification in DC-1 or the TAT of recording results in manual counting. Total TAT was calculated as the sum of the four process steps for DC-1 and for manual counting. Total TAT/cell (s) was calculated by dividing total TAT by the total cell count of the 18 cell classes.
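
The following minimal sketch shows this arithmetic for DC-1, using step durations close to the overall medians reported in Table 4 purely as illustrative inputs (medians of individual steps need not sum to the reported median total TAT).

```python
from datetime import timedelta

def parse_mmss(text):
    """Parse a 'min:s' string such as '6:48' into a timedelta."""
    minutes, seconds = text.split(":")
    return timedelta(minutes=int(minutes), seconds=int(seconds))

def total_tat(step_tats, total_cells):
    """Sum the four process-step TATs and derive TAT per cell in seconds."""
    total = sum((parse_mmss(t) for t in step_tats.values()), timedelta())
    return total, total.total_seconds() / total_cells

# Illustrative step durations for one DC-1 run (not measured values)
dc1_steps = {
    "slide handling": "0:28",      # inserting, removing, oil immersion
    "ideal-zone scan": "0:29",
    "preclassification": "6:48",
    "verification": "1:29",
}
total, per_cell = total_tat(dc1_steps, total_cells=222)
print(f"total TAT={total}, TAT/cell={per_cell:.1f} s")
```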

Statistical analyses were performed using Microsoft Excel 2016 (Microsoft, Seattle, WA, USA) and MedCalc Statistical Software (version 20.011; MedCalc Software, Ostend, Belgium). Statistical significance was defined as a two-sided p-value less than 0.05.

Results

On the five PBS slides spanning the normal WBC range, including mild leukopenia and mild leukocytosis, the XN-9000 showed no flags. DC-1 successfully counted 210 cells (excluding plasma cells and non-WBC classes) per slide; when plasma cells and non-WBC classes were included, DC-1 counted a median of 222 cells (IQR, 220–230 cells) per slide.

For repeatability, %CV was excellent in all 18 cell classes (Table 1). Segmented neutrophils showed the lowest %CV (0.0%), and plasma cells showed the highest %CV (3.5%). The five WBC differentials (segmented neutrophils, lymphocytes, monocytes, eosinophils, and basophils) showed %CVs of less than 1.0%. On the mildly leukopenic PBS slide, data for plasma cells, nRBCs, and giant platelets were not available because no such cells were counted; promyelocytes and nRBCs were likewise not available on the mildly leukocytotic PBS slide.

Table 1:

Precision of DC-1 pre-classification (n=125).

| Cell class | Number of cells^a | Mean value of cells per slide | Repeatability, SD | Repeatability, %CV |
|---|---|---|---|---|
| Segmented neutrophils | 4,060 (3,835.3–4,574) | 162.4 (153.4–183) | 1.8 (1.6–2) | 0 (0–0) |
| Band neutrophils | 9 (6.8–74.5) | 0.4 (0.3–3) | 0.5 (0.4–0.6) | 0.8 (0.2–1) |
| Lymphocytes | 630 (414.3–640.3) | 25.2 (16.6–25.6) | 1.4 (1.1–1.4) | 0.1 (0.1–0.1) |
| Variant lymphocytes | 225 (0–231.8) | 9 (0–9.3) | 1.8 (0–1.9) | 0.2 (0.2–0.2) |
| Monocytes | 114 (68.8–114) | 4.6 (2.8–4.6) | 0.8 (0.8–0.8) | 0.2 (0.2–0.3) |
| Eosinophils | 27 (18.8–51) | 1.1 (0.8–2) | 0.3 (0.3–1) | 0.3 (0.3–0.4) |
| Basophils | 24 (17.3–30.3) | 1 (0.7–1.2) | 0.2 (0.2–0.2) | 0.2 (0.2–0.2) |
| Metamyelocytes | 1 (0.8–2.8) | 0 (0–0.1) | 0.2 (0.2–0.3) | 3.2 (3.2–4.1) |
| Myelocytes | 3 (1.5–8.8) | 0.1 (0.1–0.4) | 0.3 (0.2–0.4) | 1.7 (1.6–2.6) |
| Promyelocytes | 0 (0–1.3) | 0 (0–0.1) | 0 (0–0.1) | 3.2 (2–3.5) |
| Blasts | 16 (0–20) | 0.6 (0–0.8) | 0 (0–0.2) | 0.5 (0.5–0.7) |
| Plasma cells | 0 (0–0.5) | 0 (0–0) | 0 (0–0.1) | 3.5 (3.5–3.5) |
| nRBCs | 0 (0–0) | 0 (0–0) | 0 (0–0) | 2.3 (2.3–2.3) |
| Smudge cells | 221 (210–247.5) | 8.8 (8.4–9.9) | 1.6 (1.4–1.7) | 0.2 (0.1–0.2) |
| Artifact | 14 (0.8–14) | 0.6 (0–0.6) | 0.5 (0.2–0.5) | 1.6 (0.9–3.3) |
| Unidentified | 36 (31.5–36.5) | 1.4 (1.3–1.5) | 1 (1–1.1) | 0.5 (0.5–0.8) |
| Platelet aggregation | 14 (7–26) | 0.6 (0.3–1) | 0.6 (0.3–0.8) | 0.8 (0.7–0.9) |
| Giant platelet | 1 (0.5–17) | 0 (0–0.7) | 0.2 (0.1–0.4) | 2.7 (1.6–3.9) |

^aNumber of cells are the total cell counts of preclassification. SD, standard deviation; %CV, % coefficient of variation; nRBCs, nucleated red blood cells.

The sensitivity and efficiency were high (range, 87.2–99.4% and 99.3–100.0%, respectively), except in seven cell classes (metamyelocytes, myelocytes, promyelocytes, blasts, plasma cells, nRBCs, and unidentified), for which sensitivity and efficiency were not available. In the five WBC differentials, the values were similar (range, 88.6–99.4% and 99.3–100.0%, respectively). The specificity and NPV were high in all 18 cell classes (range, 98.9–100.0% and 98.4–100.0%, respectively); in the five WBC differentials, the values were similar (range, 98.9–100% and 97.7–100%, respectively). The PPV was high (range, 90.2–100.0%) except in the seven cell classes, where it was 0% (Table 2).

Table 2:

Performance of DC-1 pre-classification on the basis of verification (n=125).

| Cell class | Number of cells^a | Sensitivity | Specificity | Positive predictive value | Negative predictive value | Efficiency |
|---|---|---|---|---|---|---|
| Segmented neutrophils | 20,075 | 98.9 (98.8–99.1) | 98.9 (98.7–99.1) | 99.5 (99.4–99.6) | 97.7 (97.4–98) | 98.9 (98.8–99) |
| Band neutrophils | 185 | 88.6 (83.2–92.8) | 100 (100–100) | 98.8 (95.3–99.7) | 99.9 (99.9–100) | 99.9 (99.9–100) |
| Lymphocytes | 3,632 | 94.8 (94–95.5) | 100 (99.9–100) | 99.8 (99.6–99.9) | 99.3 (99.2–99.4) | 99.3 (99.2–99.4) |
| Variant lymphocytes | 927 | 93.2 (91.4–94.7) | 100 (100–100) | 99.8 (99.1–99.9) | 99.8 (99.7–99.8) | 99.8 (99.7–99.8) |
| Monocytes | 1,183 | 93.9 (92.4–95.2) | 99.9 (99.9–99.9) | 97.7 (96.7–98.4) | 99.7 (99.7–99.8) | 99.7 (99.6–99.7) |
| Eosinophils | 328 | 99.4 (97.8–99.9) | 100 (100–100) | 99.1 (97.2–99.7) | 100 (100–100) | 100 (100–100) |
| Basophils | 171 | 98.8 (95.8–99.9) | 100 (99.9–100) | 97.1 (93.4–98.8) | 100 (100–100) | 100 (99.9–100) |
| Metamyelocytes | 0 | NA | 99.9 (99.8–99.9) | 0 (0–0) | 100 (100–100) | NA |
| Myelocytes | 0 | NA | 99.8 (99.8–99.9) | 0 (0–0) | 100 (100–100) | NA |
| Promyelocytes | 0 | NA | 100 (100–100) | 0 (0–0) | 100 (100–100) | NA |
| Blasts | 0 | NA | 99.8 (99.7–99.8) | 0 (0–0) | 100 (100–100) | NA |
| Plasma cells | 0 | NA | 100 (100–100) | 0 (0–0) | 100 (100–100) | NA |
| nRBCs | 0 | NA | 100 (100–100) | 0 (0–0) | 100 (100–100) | NA |
| Smudge cells | 2,510 | 90 (88.8–91.2) | 100 (100–100) | 100 (100–100) | 98.4 (98.2–98.6) | 98.6 (98.4–98.8) |
| Artifact | 78 | 87.2 (77.7–93.7) | 100 (100–100) | 93.2 (84.9–97) | 100 (99.9–100) | 99.9 (99.9–100) |
| Unidentified | 0 | NA | 99.2 (99.1–99.3) | 0 (0–0) | 100 (100–100) | NA |
| Platelet aggregation | 139 | 99.3 (96.1–100) | 100 (100–100) | 99.3 (95.1–99.9) | 100 (100–100) | 100 (100–100) |
| Giant platelet | 56 | 98.2 (90.4–100) | 100 (100–100) | 90.2 (80.5–95.3) | 100 (100–100) | 100 (100–100) |

^aNumber of cells are the total cell counts of verification. NA, not available; nRBCs, nucleated red blood cells.

The absolute mean differences (%) ranged from 0.00 to 0.73 for preclassification vs. verification, from 0.01 to 2.74 for preclassification vs. manual counting, and from 0.00 to 2.16 for verification vs. manual counting (Table 3). Preclassification and manual counting showed a very high correlation for segmented neutrophils (r=0.96); high correlations for lymphocytes (r=0.87), monocytes (r=0.88), eosinophils (r=0.73), and basophils (r=0.86); and a moderate correlation for band neutrophils (r=0.63). Verification and manual counting showed improved correlations but remained in the same interpretation groups (r=0.63–0.96). Preclassification and verification showed very high correlations in these cell classes (r=0.95–0.99) (Figure 1).

Table 3:

Comparison of cell classification between DC-1 and manual counting (n=125).

| Cell class | Cells, pre-classification | Cells, verification | Cells, manual counting | Mean difference (%, 95% CI), pre-classification vs. verification | Mean difference (%, 95% CI), pre-classification vs. manual counting | Mean difference (%, 95% CI), verification vs. manual counting |
|---|---|---|---|---|---|---|
| Segmented neutrophils | 19,957 | 20,075 | 18,757 | 0.42 (−1.41 to 2.24) | 0.09 (−5.93 to 6.11) | −0.33 (−5.82 to 5.17) |
| Band neutrophils | 166 | 185 | 365 | −0.06 (−0.56 to 0.45) | −0.79 (−2.35 to 0.78) | −0.73 (−2.3 to 0.84) |
| Lymphocytes | 3,450 | 3,632 | 3,917 | −0.58 (−1.56 to 0.4) | −2.74 (−7.86 to 2.38) | −2.16 (−7.21 to 2.89) |
| Variant lymphocytes | 866 | 927 | 736 | −0.2 (−0.66 to 0.26) | 0.23 (−1.58 to 2.04) | 0.43 (−1.51 to 2.37) |
| Monocytes | 1,137 | 1,183 | 825 | −0.16 (−1.06 to 0.74) | 0.82 (−1.72 to 3.37) | 0.98 (−1.94 to 3.91) |
| Eosinophils | 329 | 328 | 259 | 0.01 (−0.12 to 0.13) | 0.2 (−1.44 to 1.83) | 0.19 (−1.42 to 1.8) |
| Basophils | 174 | 171 | 145 | 0.01 (−0.18 to 0.2) | 0.07 (−0.58 to 0.71) | 0.06 (−0.61 to 0.72) |
| Metamyelocytes | 37 | 0 | 0 | 0.13 (−0.33 to 0.59) | 0.13 (−0.33 to 0.59) | 0 (0–0) |
| Myelocytes | 55 | 0 | 0 | 0.2 (−0.32 to 0.72) | 0.2 (−0.32 to 0.72) | 0 (0–0) |
| Promyelocytes | 6 | 0 | 0 | 0.02 (−0.17 to 0.21) | 0.02 (−0.17 to 0.21) | 0 (0–0) |
| Blasts | 73 | 0 | 0 | 0.25 (−0.32 to 0.82) | 0.25 (−0.32 to 0.82) | 0 (0–0) |
| Plasma cells | 4 | 0 | 0 | 0.01 (−0.13 to 0.15) | 0.01 (−0.13 to 0.15) | 0 (0–0) |
| nRBCs | 4 | 0 | 0 | 0.01 (−0.12 to 0.14) | 0.01 (−0.12 to 0.14) | 0 (0–0) |
| Smudge cells | 2,260 | 2,510 | 1,903 | −0.81 (−2.37 to 0.75) | 0.48 (−4.14 to 5.1) | 1.28 (−3.34 to 5.91) |
| Artifact | 73 | 78 | 16 | −0.01 (−0.21 to 0.19) | 0.19 (−1.16 to 1.54) | 0.2 (−1.15 to 1.56) |
| Unidentified | 220 | 0 | 0 | 0.73 (−0.25 to 1.72) | 0.73 (−0.25 to 1.72) | 0 (0–0) |
| Platelet aggregation | 139 | 139 | 109 | 0 (−0.01 to 0.02) | 0.06 (−0.74 to 0.87) | 0.06 (−0.75 to 0.87) |
| Giant platelet | 61 | 56 | 48 | 0.02 (−0.14 to 0.18) | 0.02 (−0.51 to 0.56) | 0.01 (−0.49 to 0.51) |

CI, confidence interval; nRBCs, nucleated red blood cells.

Figure 1: Comparison of DC-1 and manual counting (n=125). Solid line, Passing-Bablok regression; dashed line, 95% CI line; dotted line, identity line. CI, confidence interval.

The TAT (min:s) of preclassification was 6:48 in DC-1, and that of counting cells was 8:09 in manual counting. TAT of verification was 1:29 in DC-1, which was not necessary in manual counting. Thus, total TAT was 8:55 in both DC-1 and manual counting. Overall TAT/cell (s) was 2.4 in DC-1 and 2.5 in manual counting (Table 4).

Table 4:

Comparison of turnaround time between DC-1 and manual counting.

TAT, min:s (median, IQR)^b

| Method | Process step^a | Overall (n=125) | Sample 1 (n=25) | Sample 2 (n=25) | Sample 3 (n=25) | Sample 4 (n=25) | Sample 5 (n=25) |
|---|---|---|---|---|---|---|---|
| DC-1 analysis | A | 0:28 (0:26–0:30) | 0:28 (0:27–0:29) | 0:29 (0:27–0:32) | 0:27 (0:25–0:30) | 0:28 (0:27–0:29) | 0:28 (0:25–0:31) |
| | B | 0:29 (0:28–0:30) | 0:30 (0:30–0:30) | 0:29 (0:28–0:30) | 0:29 (0:29–0:35) | 0:27 (0:26–0:28) | 0:28 (0:28–0:29) |
| | C | 6:48 (6:04–8:28) | 6:48 (6:46–6:51) | 8:47 (8:34–8:52) | 4:46 (4:43–4:47) | 8:28 (8:16–8:37) | 5:05 (5:04–5:07) |
| | D | 1:29 (1:09–1:45) | 1:05 (1:01–1:08) | 1:39 (1:37–1:43) | 1:31 (1:28–1:34) | 1:09 (1:04–1:18) | 2:39 (2:35–2:48) |
| | Total TAT | 8:55 (8:38–10:41) | 8:52 (8:49–8:55) | 11:25 (11:03–11:36) | 7:14 (7:10–7:24) | 10:34 (10:15–10:45) | 8:43 (8:38–8:52) |
| | TAT/cell, s | 2.4 (2.3–2.6) | 2.4 (2.4–2.4) | 2.5 (2.4–2.6) | 2.0 (2.0–2.0) | 2.9 (2.8–2.9) | 2.3 (2.3–2.3) |
| Manual count | E | 0:11 (0:10–0:12) | 0:11 (0:10–0:11) | 0:11 (0:10–0:11) | 0:11 (0:10–0:12) | 0:11 (0:11–0:12) | 0:11 (0:10–0:12) |
| | F | 0:19 (0:18–0:19) | 0:18 (0:18–0:19) | 0:19 (0:18–0:21) | 0:19 (0:18–0:21) | 0:18 (0:18–0:19) | 0:18 (0:18–0:19) |
| | G | 8:09 (5:56–10:28) | 8:09 (8:03–8:19) | 10:28 (10:21–10:34) | 5:48 (5:42–5:57) | 10:38 (10:28–10:46) | 5:56 (5:51–6:04) |
| | H | 0:16 (0:15–0:18) | 0:16 (0:15–0:17) | 0:16 (0:15–0:18) | 0:16 (0:16–0:18) | 0:16 (0:15–0:18) | 0:17 (0:16–0:18) |
| | Total TAT | 8:55 (8:03–11:14) | 8:55 (8:48–9:03) | 11:14 (11:07–11:22) | 6:36 (6:28–6:43) | 11:24 (11:19–11:30) | 6:42 (6:37–6:50) |
| | TAT/cell, s | 2.5 (1.8–2.8) | 2.5 (2.5–2.6) | 2.8 (2.8–2.8) | 1.9 (1.9–1.9) | 3.2 (3.2–3.3) | 1.8 (1.8–1.8) |

^aA and E, TAT of preparing and changing PBS slides between runs (inserting, removing, and oil immersion); B and F, TAT of scanning the ideal zone; C, TAT of preclassification; G, TAT of cell counting; D, TAT of verification; H, TAT of recording results in manual counting. Total TAT is the sum of the four process steps in each method (DC-1 analysis and manual counting), and TAT/cell (s) was calculated by dividing total TAT by the total cell counts of the 18 cell classes on each PBS slide. ^bWBC counts in each sample were: sample 1, 3.90 × 10⁹/L; sample 2, 5.19 × 10⁹/L; sample 3, 6.72 × 10⁹/L; sample 4, 8.72 × 10⁹/L; sample 5, 10.78 × 10⁹/L. TAT, turnaround time; IQR, interquartile range.

Discussion

In this study, we comprehensively evaluated the performance of the newly launched DC-1, including precision, qualitative performance, comparison of cell counts between DC-1 and manual counting, and TAT. DC-1 has been evaluated in one previous study in terms of accuracy, within-run imprecision, and clinical performance (sensitivity, specificity, and agreement for WBC classification); in that study, however, the evaluation was limited to six major cell classes, and the analytical performance was not explored [15]. Our study differs from the previous study in that we explored the analytical performance of each of the 18 cell classes provided by DC-1, compared both DC-1 pre-classification and verification with manual counting, and assessed laboratory efficiency in terms of TAT.

The %CV indicated excellent precision on all PBS slides, and precision was even higher for the five WBC differentials. However, for several cell classes it could not be assessed whether the values would change with WBC counts, because no such cells were counted on the mildly leukopenic and mildly leukocytotic PBS slides (Table 1). In previous studies that included pathologic samples, precision tended to be lower on PBS slides with lower WBC counts, especially in moderate and severe leukopenia [23], [24], [25], [26], [27]. The %CV might have been underestimated, because this study analyzed only PBS slides spanning the normal WBC range, without pathologic samples.

In our data, the specificity and NPV were high, and these results are similar to previous findings on other DM analyzers. However, the sensitivity and efficiency were not available in seven cell classes (metamyelocytes, myelocytes, promyelocytes, blasts, plasma cells, nRBCs, and unidentified), because all cells preclassified into these classes by DC-1 were verified as other cells (Table 2). This finding implies that DC-1 might create unnecessary workload, requiring verification by hematology experts even in normal samples. These results suggest that, as with other DM analyzers, preclassified results from DC-1 should be verified by hematology experts, as recommended by the ICSH guidelines [2]. The PPV was low in the seven cell classes; because each PBS slide underwent 25 replicates according to the CLSI guidelines, a wrongly counted cell may be multiplied, lowering the PPV through statistical error. In addition, data on several cell classes were not available because those cell classes were not present on the PBS slides used in this study.

The absolute mean differences were acceptable in all cell classes. Preclassification vs. manual counting showed the largest absolute mean differences, and preclassification vs. verification showed the smallest (Table 3). In previous studies that included pathologic samples, absolute mean differences were larger at lower WBC counts, especially in moderate and severe leukopenia [11, 27]. Absolute mean differences might therefore have been underestimated in this study, which did not include pathologic samples.

Preclassification vs. manual counting showed high to very high correlations in five cell classes (segmented neutrophils, lymphocytes, monocytes, eosinophils, and basophils) but a moderate correlation in band neutrophils. Verification vs. manual counting showed the same interpretation. In preclassification vs. verification, however, all cell classes showed very high correlations (Figure 1). When band and segmented neutrophils were not counted separately, neutrophils (band plus segmented neutrophils) showed the same correlation coefficient as segmented neutrophils. Because band neutrophils constitute a small proportion of cells, their detection would be strongly affected by the reading zone on the PBS slide. This might explain the lower correlation for DC-1 vs. manual counting, in which the PBS slides might be read in different zones, compared with preclassification vs. verification, in which the same ideal zone is read [12, 18, 28].

In addition, these results differ from previous findings that showed lower correlations for basophils. In previous studies, the reported correlation coefficients were 0.54 for DI-60 verification vs. manual counting, 0.36 for Vision Pro verification vs. manual counting, and 0.76 for DM96 vs. manual counting [11, 27, 29]. Those results might be due to the small number of basophils, as with band neutrophils, because the previous studies included moderately and severely leukopenic PBS slides.

Total TAT showed no trend with WBC counts in either DC-1 or manual counting; it was longest in sample 2 and shortest in sample 3. The overall TAT of preclassification in DC-1 was shorter than the overall TAT of cell counting in manual counting; however, the overall total TAT and overall TAT/cell did not differ between DC-1 and manual counting (Table 4). Because the TATs of preclassification and verification constitute a large portion of the total TAT in DC-1, the verification step lengthened the total TAT; in contrast, cell counting alone constitutes the bulk of the total TAT in manual counting, which has no verification process. Of note, there were two error events in which DC-1 could not scan the ideal zone; these errors were caused by too few drops of immersion oil and increased the TAT. In a previous study that compared TAT between a DM analyzer and manual counting, the DM analyzer showed a longer TAT, especially in samples with severe leukopenia [30]. TAT might have been underestimated in our study because we used only PBS slides spanning the normal WBC range, without pathologic samples. Our results showed that DC-1 was not superior to manual counting in terms of TAT, so it is questionable whether DC-1 could increase laboratory efficiency. In the context of telemedicine, however, it has the potential to supplement manpower at small to medium-volume laboratories that lack hematology experts. Evaluating laboratory efficiency using a comparative assessment of risk and TAT would be an area of future research.

This study has several limitations. First, only the RAL stain was used for PBS preparation. Other staining methods should be compared for performance evaluation, because the performance of DM analyzers may vary according to the staining method [31]. The CLSI H20-A2 guidelines recommend manual counting on Romanowsky-stained PBS slides, but we used RAL-stained PBS slides for manual counting [2, 16]. Second, the SP-10, which is usually used in high-volume laboratories, was used for PBS preparation rather than the RAL SmearBox and RAL StainBox, which are usually used in small to medium-volume laboratories. Thus, the TAT of PBS preparation using the RAL SmearBox and RAL StainBox should be further compared with that of PBS preparation using the SP-10. Third, this study was performed using only normal samples spanning normal WBC counts. Because this study did not include pathologic samples, including moderate and severe leukopenia, the performance of DC-1 might have been overestimated. PBS review can be initiated by laboratory staff based on CBC findings or requested by the clinician, and individual laboratories can develop or set their own review criteria [16]. In addition to abnormal samples, normal samples can be reviewed to resolve clinicians' concerns pertaining to specific patient populations and for quality control/quality assurance and technical/educational purposes [1, 8]. In DM analyzers, freshly stained blood samples with normal WBC counts are also used for quality control of cell-location performance [8]. Accordingly, in-depth analysis of the qualitative performance of DM analyzers using normal samples is a basic approach to guarantee good laboratory practice, especially because no other commercially available QC material is available for daily use for this test [1]. Moreover, although only five samples were included in this study, the actual analysis was based not on the five slides but on the 125 replicated measurements, and a total of 56,364 cells were counted for both DC-1 verification and manual counting; this was a sufficient number of measurements to explore the analytical performance of DC-1. Considering that DC-1 is a stand-alone system and slides need to be loaded manually one at a time, this task, performed by the hematology expert, was labor-intensive and technically demanding [15]. Fourth, DC-1 performs preclassification only in the ideal zone of PBS slides, which can miss pathologic cells located at other sites. DM analyzers performing full-field review of PBS slides might solve this problem [32]; the performance of DC-1 should be compared with that of a DM analyzer performing full-field review of PBS slides.

In conclusion, this is the first study to comprehensively evaluate the performance of DC-1 on PBS slides spanning the normal WBC range. This study showed that DC-1 has reliable analytical performance in all cell classes and can be used in small to medium-volume laboratories to supplement manpower in daily PBS review practice. However, DC-1 might create unnecessary workload requiring verification by hematology experts even in normal samples, and the TAT of DC-1 was not superior to that of manual counting, leaving open the question of whether DC-1 would indeed improve laboratory efficiency. Our findings imply that implementing a DM analyzer by itself cannot guarantee an improved laboratory workflow, and further optimization would be warranted considering each laboratory's situation and unmet needs. Our study, based on normal samples, may stimulate further studies using a wide array of pathological samples.


Corresponding author: Mina Hur, MD, PhD, Department of Laboratory Medicine, Konkuk University School of Medicine, Konkuk University Medical Center, 120-1 Neungdong-ro, Gwangjin-gu, Seoul 05030, Korea, Phone: +82-2-2030-5581, E-mail:

  1. Research funding: This paper was supported by Konkuk University in 2022.

  2. Author contributions: Lee GH collected the samples, analyzed the data, and wrote the draft; Hur M conceived the study and finalized the draft; Yoon S, Nam M, and Kim H collected the samples and analyzed the data. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: Authors state no conflict of interest.

  4. Informed consent: The requirement for informed consent was waived because no additional blood sample collection or intervention was required.

  5. Ethical approval: This study was conducted according to the Declaration of Helsinki, and the study protocol was approved by the Institutional Review Board of KUMC (2022-02-050, 2 March 2022).

References

1. Gulati, G, Song, J, Florea, AD, Gong, J. Purpose and criteria for blood smear scan, blood smear examination, and blood smear review. Ann Lab Med 2013;33:1–7. https://doi.org/10.3343/alm.2013.33.1.1.

2. Kratz, A, Lee, SH, Zini, G, Riedl, JA, Hur, M, Machin, S, et al. Digital morphology analyzers in hematology: ICSH review and recommendations. Int J Lab Hematol 2019;41:437–47. https://doi.org/10.1111/ijlh.13042.

3. La Gioia, A, Fiorini, F, Fumi, M, Fiorini, M, Pancione, Y, Rocco, L, et al. A prolonged microscopic observation improves detection of underpopulated cells in peripheral blood smears. Ann Hematol 2017;96:1749–54. https://doi.org/10.1007/s00277-017-3073-z.

4. International Council for Standardization in Haematology, WG, Briggs, C, Culp, N, Davis, B, d'Onofrio, G, Zini, G, et al. ICSH guidelines for the evaluation of blood cell analysers including those used for differential leucocyte and reticulocyte counting. Int J Lab Hematol 2014;36:613–27. https://doi.org/10.1111/ijlh.12201.

5. Briggs, C, Longair, I, Slavik, M, Thwaite, K, Mills, R, Thavaraja, V, et al. Can automated blood film analysis replace the manual differential? An evaluation of the CellaVision DM96 automated image analysis system. Int J Lab Hematol 2009;31:48–60. https://doi.org/10.1111/j.1751-553x.2007.01002.x.

6. Hur, M, Cho, JH, Kim, H, Hong, MH, Moon, HW, Yun, YM, et al. Optimization of laboratory workflow in clinical hematology laboratory with reduced manual slide review: comparison between Sysmex XE-2100 and ABX Pentra DX120. Int J Lab Hematol 2011;33:434–40. https://doi.org/10.1111/j.1751-553x.2011.01306.x.

7. Lippi, G, Da Rin, G. Advantages and limitations of total laboratory automation: a personal overview. Clin Chem Lab Med 2019;57:802–11. https://doi.org/10.1515/cclm-2018-1323.

8. Kim, HN, Hur, M, Kim, H, Kim, SW, Moon, HW, Yun, YM. Performance of automated digital cell imaging analyzer Sysmex DI-60. Clin Chem Lab Med 2017;56:94–102. https://doi.org/10.1515/cclm-2017-0132.

9. Kratz, A, Bengtsson, HI, Casey, JE, Keefe, JM, Beatrice, GH, Grzybek, DY, et al. Performance evaluation of the CellaVision DM96 system: WBC differentials by automated digital image analysis supported by an artificial neural network. Am J Clin Pathol 2005;124:770–81. https://doi.org/10.1309/xmb9k0j41lhlatay.

10. Tabe, Y, Yamamoto, T, Maenou, I, Nakai, R, Idei, M, Horii, T, et al. Performance evaluation of the digital cell imaging analyzer DI-60 integrated into the fully automated Sysmex XN hematology analyzer system. Clin Chem Lab Med 2015;53:281–9. https://doi.org/10.1515/cclm-2014-0445.

11. Yoon, S, Hur, M, Park, M, Kim, H, Kim, SW, Lee, TH, et al. Performance of digital morphology analyzer Vision Pro on white blood cell differentials. Clin Chem Lab Med 2021;59:1099–106. https://doi.org/10.1515/cclm-2020-1701.

12. AAFP. Classifications for small, medium, and high volume labs, and specialty labs. Available from: https://www.aafp.org/family-physician/practice-and-career/managing-your-practice/clia/lab-classifications.html [Accessed 9 Jan 2022].

13. Simson, E, Gascon-Lema, MG, Brown, DL. Performance of automated slidemakers and stainers in a working laboratory environment - routine operation and quality control. Int J Lab Hematol 2010;32:e64–76. https://doi.org/10.1111/j.1751-553x.2009.01141.x.

14. Rosetti, M, De la Salle, B, Farneti, G, Clementoni, A, Poletti, G, Dorizzi, RM. The added value of digital morphological analysis in the evaluation of peripheral blood films: the report of an UKNEQAS external quality assessment sample. Ann Hematol 2022;101:729–30. https://doi.org/10.1007/s00277-021-04595-9.

15. van der Vorm, LN, Hendriks, HA, Smits, SM. Performance of the CellaVision DC-1 digital cell image analyser for differential counting and morphological classification of blood cells. J Clin Pathol 2021. https://doi.org/10.1136/jclinpath-2021-207863 [Online ahead of print].

16. Clinical and Laboratory Standards Institute (CLSI). Reference leukocytes (WBC) differential count (proportional) and evaluation of instrumental methods: approved standard. In: CLSI document H20-A2, 2nd ed. Wayne, PA: CLSI; 2007.

17. CLSI. User verification of precision implementation guide. In: CLSI implementation guide EP15-Ed3-IG1, 1st ed. Wayne, PA, USA: Clinical and Laboratory Standards Institute; 2021.

18. CLSI. User verification of precision and estimation of bias; approved guideline. In: CLSI document EP15-A3, 3rd ed. Wayne, PA, USA: Clinical and Laboratory Standards Institute; 2014.

19. Barnhart, HX, Barboriak, DP. Applications of the repeatability of quantitative imaging biomarkers: a review of statistical analysis of repeat data sets. Transl Oncol 2009;2:231–5. https://doi.org/10.1593/tlo.09268.

20. Clinical and Laboratory Standards Institute (CLSI). User protocol for evaluation of qualitative test performance. In: CLSI document EP12-A2, 2nd ed. Wayne, PA: CLSI; 2008.

21. Clinical and Laboratory Standards Institute (CLSI). Measurement procedure comparison and bias estimation using patient samples. In: CLSI document EP09c, 3rd ed. Wayne, PA: CLSI; 2008.

22. Mukaka, MM. Statistics corner: a guide to appropriate use of correlation coefficient in medical research. Malawi Med J 2012;24:69–71.

23. Hubl, W, Tlustos, L, Bayer, PM. Use of precision profiles to evaluate precision of the automated leukocyte differential. Clin Chem 1996;42:1068–73. https://doi.org/10.1093/clinchem/42.7.1068.

24. Lippi, G, Nicoli, M, Modena, N, Guidi, G. Clinical performance of leukocyte differential on the new Roche Cobas Vega haematological analyzer. Eur J Clin Chem Clin Biochem 1997;35:105–10.

25. Rumke, CL. Imprecision of ratio-derived differential leukocyte counts. Blood Cells 1985;11:311–4, 5.

26. Vives-Corrons, JL, Besson, I, Jou, JM, Gutierrez, G. Evaluation of the Abbott Cell-DYN 3500 hematology analyzer in university hospital. Am J Clin Pathol 1996;105:553–9. https://doi.org/10.1093/ajcp/105.5.553.

27. Yoon, S, Hur, M, Lee, GH, Nam, M, Kim, H. How reproducible is the data from Sysmex DI-60 in leukopenic samples? Diagnostics (Basel) 2021;11:2173. https://doi.org/10.3390/diagnostics11122173.

28. Lee, LH, Mansoor, A, Wood, B, Nelson, H, Higa, D, Naugler, C. Performance of CellaVision DM96 in leukocyte classification. J Pathol Inform 2013;4:14. https://doi.org/10.4103/2153-3539.114205.

29. Park, SH, Park, CJ, Choi, MO, Kim, MJ, Cho, YU, Jang, S, et al. Automated digital cell morphology identification system (CellaVision DM96) is very useful for leukocyte differentials in specimens with qualitative or quantitative abnormalities. Int J Lab Hematol 2013;35:517–27. https://doi.org/10.1111/ijlh.12044.

30. Nam, M, Yoon, S, Hur, M, Lee, GH, Kim, H, Park, M, et al. Digital morphology analyzer Sysmex DI-60 vs. manual counting for white blood cell differentials in leukopenic samples: a comparative assessment of risk and turnaround time. Ann Lab Med 2022;42:398–405. https://doi.org/10.3343/alm.2022.42.4.398.

31. Kim, HN, Hur, M, Kim, H, Park, M, Kim, SW, Moon, HW, et al. Comparison of three staining methods in the automated digital cell imaging analyzer Sysmex DI-60. Clin Chem Lab Med 2018;56:e280–3. https://doi.org/10.1515/cclm-2018-0539.

32. Katz, BZ, Feldman, MD, Tessema, M, Benisty, D, Toles, GS, Andre, A, et al. Evaluation of Scopio Labs X100 Full Field PBS: the first high-resolution full field viewing of peripheral blood specimens combined with artificial intelligence-based morphological analysis. Int J Lab Hematol 2021;43:1408–16. https://doi.org/10.1111/ijlh.13681.

Received: 2022-08-04
Accepted: 2022-09-26
Published Online: 2022-10-31
Published in Print: 2023-01-27

© 2022 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
