
Artificial intelligence in laboratory medicine: fundamental ethical issues and normative key-points

  • Federico Pennestrì and Giuseppe Banfi

Abstract

The contribution of laboratory medicine to delivering value-based care depends on active cooperation and trust between pathologist and clinician. The effectiveness of medicine more generally depends in turn on active cooperation and trust between clinician and patient. Since the second half of the 20th century, the art of medicine has been challenged by the spread of artificial intelligence (AI) technologies, which have recently shown performances comparable to those of flesh-and-bone doctors in some diagnostic specialties. Being the principal source of data in medicine, the laboratory is a natural ground where AI technologies can disclose the best of their potential. In order to maximize the expected outcomes and minimize risks, it is crucial to define ethical requirements for data collection and interpretation by design, clarify whether they are enhanced or challenged by specific uses of AI technologies, and preserve these data under rigorous but feasible norms. From 2018 onwards, the European Commission (EC) has been working to lay the foundations of sustainable AI development among European countries and partners, from both a cultural and a normative perspective. Alongside the work of the EC, the United Kingdom has provided complementary advice worth considering in order to put science and technology at the service of patients and doctors. In this paper we discuss the main ethical challenges associated with the use of AI technologies in pathology and laboratory medicine, and summarize the most pertinent key points from the aforementioned guidelines and frameworks.

Introduction

The contribution of laboratory medicine to delivering value-based care largely depends on active cooperation and trust between pathologist and clinician [1]. The effectiveness of medicine more generally depends in turn on active cooperation and trust between clinician and patient. According to Hippocrates, medicine is the battle of patient and physician against disease, where the physician “serves the art, and the patient leads the battle” [2]. Medicine is more than medical science and technology: its value lies in the physician’s ability to take advantage of science and technology to improve the patient’s wellbeing. This should never be taken for granted, considering how supply-driven innovation can divert important resources from real problems, experienced by real patients, in a real world [3], how increasing healthcare expenditure is associated with decreasing benefits perceived by patients [4], and how decreasing marginal benefits are largely explained by the rising costs of technology itself (assessment, purchase, human training, regulation) [4, 5]. What the patient’s wellbeing is – what each different patient’s wellbeing is – needs to be investigated, translated into appropriate treatments and monitored with appropriate outcomes under the therapeutic alliance of patient and doctor.

Since the second half of the 20th century, the art of medicine has been challenged by the spread of artificial intelligence (AI) technologies [6], which achieve performances comparable to those of flesh-and-bone doctors in some diagnostic specialties [7]. AI technologies offer great opportunities to doctors and patients [8, 9], provided these opportunities do not turn the ethical and epistemological foundations of the art on their head (improving the patient’s wellbeing and choosing the best possible treatments given the information currently available) [10], [11], [12]. On the one hand, promising technologies like machine learning are capable of processing an amount of data well beyond the reach of a single human mind, improving diagnostic accuracy, detecting diseases before they manifest, improving prevention, designing patient-centred care pathways, enhancing epidemiology, supporting population health management, and reducing the negative impact of social determinants of health [9, 13], [14], [15], [16]. On the other hand, the same technologies pose critical threats to patient privacy and safety, care providers’ protection from liability, opportunities for employment, patient engagement, clinician trust and scientific progress itself [17], [18], [19].

AI technologies can disclose the best of their potential in laboratory medicine, supporting clinical decision-making (diagnoses, prognoses, treatments), research (drug testing and development, precision medicine, bio-banks) and health policy (epidemiology and evidence-based resource allocation). In order to maximize the expected outcomes and minimize risks, it is crucial to keep the ethical and epistemological foundations of medicine in mind, clarify whether they are enhanced or challenged by the employment of AI technologies, and preserve them under rigorous but feasible norms.

From 2018 onwards, the European Commission (EC) has been working to lay the foundations of sustainable AI development among European countries and partners, from both a cultural [18] and a normative perspective [20]. The EC started with the publication of a first draft of the “Ethics guidelines for trustworthy AI” in April 2019 [21], continued with the “White Paper on Artificial Intelligence” in February 2020 [22], and arrived at the proposal for an “Artificial Intelligence Act” in April 2021 [20]. After the White Paper was released, six months of open online consultation were offered to all the stakeholders potentially affected by the regulation, including public and private institutions, governments, local authorities, businesses and non-profit organizations, social players, experts, scholars and citizens. The goal was to agree on essential baseline arrangements to regulate and develop AI technologies by design, rather than correcting unexpected issues ex-post, given that technology progresses more rapidly than regulation. Alongside the work of the EC, the United Kingdom (UK) Government Department of Health & Social Care delivered “A guide to good practice for digital and data-driven health technologies” [23], providing complementary advice worth considering in order to put science and technology at the service of patients (and their doctors). In this paper the authors discuss the main ethical challenges associated with the use of AI technologies in pathology and laboratory medicine, and summarize the most pertinent key points from the aforementioned guidelines and frameworks.

Main ethical issues

The main ethical issues emerging from the use of AI technologies in laboratory medicine stem from the specific role played by laboratory professionals, the automatic elaboration of data and the use of sensitive patient information. The performance of AI technologies highly depends on the quality of inputs, the context in which they are collected and the way they are interpreted (clinical sensitivity and specificity; units of measurement used – e.g., molar or mass; critical parameters chosen; data format; population of interest; international standard compliance; interoperability across different professionals and settings) [24, 25]. However, the quality and validation of inputs is outside the domain of machines. Therefore, the active contribution of laboratory professionals is key to providing accurate data analysis and interpretation across the entire process, possibly supported by a common language for identifying health measurements, observations and documents, such as the Logical Observation Identifiers Names and Codes (LOINC) [26], [27], [28]. In addition, most examinations are directly available to patients, with basic reference ranges and semantic markers (e.g., asterisks) enabling outcome interpretation. Considering the impact of many examinations on patient health, psychological balance and life decisions, qualified professionals, ethical committees and/or scientific associations must be formally involved in the assessment of AI technology accuracy and reliability.
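To illustrate how a shared terminology such as LOINC supports machine-readable laboratory results and the simple semantic flagging mentioned above, the following minimal sketch pairs each observation with a code, unit and reference range. The specific codes, values and ranges are shown for illustration only and should be verified against the official LOINC database and local laboratory standards.

```python
# Minimal, hypothetical sketch: structured laboratory observations keyed by
# LOINC code, with units and reference ranges. Codes and ranges are
# illustrative assumptions, not authoritative values.

OBSERVATIONS = [
    # (LOINC code, analyte, value, unit, low, high)
    ("2345-7", "Glucose [Mass/volume] in Serum or Plasma", 112.0, "mg/dL", 70.0, 99.0),
    ("718-7", "Hemoglobin [Mass/volume] in Blood", 14.1, "g/dL", 13.5, 17.5),
]

def flag(value, low, high):
    """Return the semantic marker used in patient-facing reports."""
    if value < low:
        return "*L"   # below reference range
    if value > high:
        return "*H"   # above reference range
    return ""         # within reference range

for code, name, value, unit, low, high in OBSERVATIONS:
    print(f"{code} | {name}: {value} {unit} ({low}-{high}) {flag(value, low, high)}")
```

Standardizing results on such codes is what makes them interoperable across professionals and settings; the interpretation of the flag, however, still rests with the laboratory professional and the clinician.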

The four principles of biomedical ethics (respect for patient autonomy, beneficence, non-maleficence and justice) help analyze these general issues in more detail [29]. Respect for patient autonomy means that patients decide what is good for themselves, not in terms of choosing which treatments are technically more appropriate to meet their health-related needs, but by helping the doctor understand what needs they find most relevant for their health and quality of life, and choose the best possible solutions. This is key to generating personal and societal value from impersonal treatments and technologies [1]. From the Kantian perspective of the moral imperative (always treat the patient as an end, and never merely as a means), respect for patient autonomy means that patients have their data employed for their own health, and not for the health of other patients, the interests of insurance providers or the benefit of medical device suppliers.

Patients can obviously agree to having their data employed for reasons beyond their own health, once they are aware of these reasons and provide informed consent (e.g., employing residual biological material for the sake of biomedical research) [30]. It should never be taken for granted that the consent provided by patients to employ personal health data in order to receive certain treatments is itself a consent to employ the same data for other purposes, solidarity included, as researchers recall [31]. Finally, patients need to be aware that medical decisions concerning their health rely on data processed by machines. Is a pathologist who employs AI technologies to interpret an image covered by the same informed consent given by the patient to obtain that image, or are two separate authorizations needed (one to perform the examination, one to authorize machine-supported interpretation)? Should a patient know that the diagnosis is supported by the use of AI? The problem here is not the fact that a doctor relies on external support to interpret an image, as normally happens when doctors receive help from colleagues or technicians in making the right diagnosis; the problem is the fact that external support is provided by a machine, that is, ultimately, the extent to which we can rely on a machine to take significant decisions on our health. From a medico-legal perspective, there is no ground yet (no precedents, no specific laws) to hold machines responsible (and liable) for their predictions [32]. This is confirmed and clearly emphasized in the context of driving automation, as the American National Highway Traffic Safety Administration (NHTSA) recommended that commercial vehicles “require the human driver to be in control at all times” (after a fatal car accident was caused by a distracted driver who relied entirely on car automation) [33].

From an ethical perspective, the debate is richer. Hatherley recently argued against the possibility of trusting a machine, considering that trust, in medicine, has a peculiarly human connotation [34]. Trust in medicine has both an intrinsic and a functional meaning. From an intrinsic point of view, medicine is structurally based on trust, as patients (by definition vulnerable) and doctors (by definition experts in a certain condition) are tied by a substantial degree of asymmetric information. From a functional point of view, patients who trust their doctors will more easily adhere to prescriptions, reveal sensitive data and feel properly cared for. Trust is based on the belief that someone is acting in my interest, or better, that my interest is “encapsulated” in the actor, because of the actor’s vocation, because of personal knowledge of the patient and/or because of a strong deontological commitment. Trust is more than simple confidence, which is based on the belief that what has happened so far will necessarily happen again. For example, a thief is confident that a victim who leaves home every day at 8 a.m. will leave home at the same time on the day of the burglary; but we can hardly say that the thief trusts the victim [34]. After seeing the Sun rise in the morning for an entire lifetime, people can count on the fact that the Sun will rise again tomorrow. But we can hardly say that people trust the Sun, except perhaps in a religious sense. Likewise, we can count on a navigation system to drive us home after work, or a machine learning device to take probabilistic decisions from a complex web of information, but we can hardly say that we trust the navigation system or the machine learning device to do their job. Trust presupposes some degree of awareness and good will on the part of the trustee. Given these premises, it would be more appropriate to talk about confidence towards machines, in terms of counting on their ability to make instrumental decisions based on the information they have available.

According to Ferrario et al. [35], on the contrary, it is possible to trust a machine, once a conception of trust adequate to its nature has been adopted. For instance, trust can be reduced to lack of control, in the sense that I trust someone or something when I do not feel the need to monitor their activity continuously. The more I trust someone, the more I count on him or her doing what they are supposed to do without supervision. From this perspective, the fact that a doctor employs an algorithm without checking its consistency every time means, in a very mild sense, that the doctor trusts the machine. If the doctor trusts the examination in a mild sense and the patient trusts the doctor in a strong sense, the patient’s trust in AI is reflected (reflective trust). Similarly, machines can be trusted from an instrumental perspective. The navigation system and the machine learning device mentioned above are instruments to meet user expectations and autonomy, as they process a huge amount of information to help the users (driver or patient) meet their own deliberated goals (reaching home through the shortest possible route or achieving a meaningful improvement in physical wellbeing).

In practice, the theoretical question of trust turns into the material question of evaluating risks and benefits, or finding a morally acceptable compromise between beneficence and non-maleficence. The benefits of AI in laboratory medicine are several: the automation of certain procedures can dramatically increase the speed and generalizability of complex analyses and clinical decisions, allowing doctors to perform more examinations or dedicate more time to each patient; relevant information neglected by the natural human eye and/or current medical culture can trigger significant prevention and therapy breakthroughs (e.g., correlations between small-sized details in images and exposure to disease, between social determinants and health outcomes, between objective physical features and dominant medical interpretation); patients affected by chronic diseases and/or disability can stay home and have their biomarkers periodically checked under remote professional supervision. If all these benefits were substantially proven and AI technologies performed more efficiently than humans, it could be immoral not to employ them.

Still, there are potential harms to be considered first. To begin with, it may be easier for a trained doctor to check the prediction of a machine in everyday clinical practice (provided the algorithm is not too complex) than for managers with no medical expertise, although the latter employ the same predictions on a daily basis for cost-effectiveness analysis, public policy and allocative decisions. To avoid trivial misunderstandings, flesh-and-bone medical consultation should be guaranteed at all levels. For instance, a machine learning technology found asthma alone to be more deadly than asthma with pneumonia [17], because patients affected by both conditions were undergoing more intensive care due to pneumonia; the correct interpretation was therefore not “asthma is more dangerous than asthma with pneumonia”, but “pneumonia is more dangerous than asthma”, with multiple decisions following. In the same way, AI technologies can hamper medical progress and feed unconscious but critical biases. “Loopthink” or “group thinking” [18] is the condition in which machines learn from inputs that reflect the theories of the scientists and medical doctors who teach them (“this data must be interpreted this way”), or build their inferences on limited data sets characterised by the underrepresentation of certain categories of patients, which may suggest erroneous conclusions without considering different possible explanations. For instance, a machine that associates obese or disabled patients with a certain disease can automatically neglect any further pathogenetic explanation beyond disability and obesity alone, while other factors may exist and be treated effectively. According to Norman Daniels, health inequalities are not in themselves an expression of social injustice, while health inequalities resulting from avoidable determinants are [36]; from this perspective, substantial efforts to prevent AI technologies from producing systematic discrimination become a matter of social justice, just like ensuring that AI technologies are not monopolized by for-profit entities insensitive to these issues.
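The asthma/pneumonia pitfall can be made concrete with a small synthetic simulation. The figures below are purely illustrative assumptions, not data from the study cited above: because patients with both conditions receive intensive care, their observed mortality becomes lower, so a model trained naively on observed outcomes would rank them as lower risk.

```python
# Purely illustrative synthetic simulation of treatment confounding (not real
# clinical data): patients with asthma + pneumonia receive intensive care,
# which lowers their observed mortality, so a naive risk model trained on
# observed outcomes would wrongly rank them as lower risk than asthma alone.
import random

random.seed(0)

def simulate_patient(has_pneumonia):
    # Hypothetical baseline mortality risks (assumed numbers for illustration)
    untreated_risk = 0.20 if has_pneumonia else 0.03
    # Pneumonia triggers intensive care, which sharply reduces observed risk
    treated_risk = 0.02 if has_pneumonia else untreated_risk
    return random.random() < treated_risk

def observed_mortality(has_pneumonia, n=100_000):
    deaths = sum(simulate_patient(has_pneumonia) for _ in range(n))
    return deaths / n

print("Asthma only:           ", observed_mortality(False))
print("Asthma with pneumonia: ", observed_mortality(True))
# The second rate is lower *because of treatment*, not because pneumonia is
# protective; a model blind to the treatment variable learns the wrong lesson.
```

The point of the sketch is that the data faithfully reflect clinical practice, yet the naive inference drawn from them is clinically wrong, which is why human interpretation must remain in the loop.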

A moral use of technology requires costs and benefits to be distributed as equally as possible among the community of stakeholders. In low-to-medium income countries, where the combination of scarce resources, brain drain, uneven distribution of services and healthcare fragmentation can substantially compromise the availability of medical expertise, machine learning can help a few qualified professionals deliver appropriate advice to a considerably higher number of patients, or help less qualified professionals deliver the same advice by relying on technology. In combination with telemedicine covering the distances between patients and healthcare professionals, AI technologies can therefore contribute to a substantial reduction in local and global care inequalities. More generally, AI is meant to reduce, not augment, disparities (e.g., those following from the digital divide), protecting vulnerable citizens from potential manipulation (e.g., minors, disabled patients, the elderly). A comprehensive survey of a sample of American students of different ages, sexes, employment statuses and races showed that 50.6% declared themselves willing to share and exchange their sensitive genetic data for some questionable purposes, such as being authorized by public agencies to reduce key information available to potential customers, erase business data, or exploit the customers’ data they hold for other purposes [31]. If AI data sets are not protected from such hazards, they can end up in the hands of private agencies that may, for instance, deny health insurance based on a patient’s chronic health conditions or family history. At the same time, the predictive and diagnostic accuracy of some AI examinations can reveal details of a patient’s health that the patient does not want or does not need to be aware of, as they compromise emotional balance and life plans without adding any therapeutic or preventive value (e.g., the progressive onset of an incurable genetic disease). Would the pathologist be required to share this information with the patient, their family or guardian? Several arrangements can be introduced to preserve the patient’s psychosocial wellbeing and “right not to know” [37], from introducing anticipated patient directives to seeking only the information strictly necessary to take a certain decision (the so-called Ockham’s razor) [38]. Similar reasons supported, in certain European countries, the adoption of living wills to inform end-of-life care, or the authorization of embryo selection only for the purpose of disease prevention, in order to avoid eugenics. As far as artificial intelligence is concerned, the more information is made available to machines to perform accurate assessments, the more electronic systems are needed to share this information, and the greater the risk of causing unintended harm to patients, as sensitive data leaks may increasingly happen in the process. For instance, some biological parameters explaining the progression of a certain cancer can make a fatal prognosis clear to the patient (e.g., six months of life expectancy) beyond what is necessary to adopt valuable treatment decisions, pouring additional emotional burden on the patient and compromising their ability to maintain an acceptable quality of life in the remaining time.
These concerns worsen when we consider how easy it has become to search the Internet for health-related information of questionable safety, quality and appropriateness, often more sensationalist than necessary and without any mediation by healthcare professionals or relatives (e.g., specialist, general practitioner, psychologist, caregiver).

The four principles of biomedical ethics help focus on specific requirements to be respected across the entire cycle of design, development, distribution and use of AI technologies in laboratory medicine, laying the foundations for structured clinical, scientific and industrial cooperation.

From ethical issues to contextual norms

Fragmented efforts are being made to translate ethical issues into contextual norms in different countries, at both top-down and bottom-up levels. For instance, the American Medical Informatics Association introduced a dedicated code of ethics for researchers and healthcare professionals working with information technology [39]. In Sweden, a national policy on the sharing of clinical imaging datasets was reached through a joint consensus between data and medical experts [40]. The European law proposal on AI provides a useful horizontal framework applicable to general AI applications in sensitive areas of human activities and rights, while the UK Government guidelines provide complementary healthcare advice, most of which can be generalized to laboratory medicine and to European countries.

To the extent of interest here, the former framework can be summarized in the following principles:

  1. Proportional regulation:

    1. Norms must be as consistent as possible with other existing regulation (either EU-level or local), to avoid overlapping requirements and useless bureaucracy;

    2. Norms must be consistent with existing regulation on data protection, including the General Data Protection Regulation (GDPR), applicable since 2018;

    3. Norms must be consistent with the European Union (EU) Charter of Fundamental Rights;

    4. Norms should define clear common requirements to be respected by all developers across the EU, including partners, but leave the same developers free to find the most appropriate technical solutions for this purpose;

    5. Norms should be more or less demanding depending on the risk associated with each specific technology (from prohibition to monitoring to self-management);

    6. Sanctions should be proportional to violation, from administrative fines to court actions;

  2. Clarity of regulation and goals:

    1. Clear definitions (e.g., AI, machine learning, deep learning) are adopted;

    2. Clear risk-classes are outlined;

    3. Algorithm producers must declare each purpose and function of their products;

  3. Horizontal regulation:

    1. Norms do not apply to specific technologies, but rather are designed to apply to clusters of risk, based on the previous definition of purpose and functioning;

    2. Norms apply equally to EU countries, EU countries exporting technologies to other countries, and EU countries importing technologies from other countries;

  4. Vertical back-and-forth monitoring:

    1. Local developers should enrol in national registries and, along with users, report any ethically relevant information to a national supervisory agency;

    2. National agencies report information to a central European committee, composed of representatives of each country and the EC; the European committee provides consultancy, identifies best practices, and enables the Commission to adopt sanctions when needed;

    3. A European database will be introduced to collect information on high-risk technologies; developers of non-high-risk technologies take their own initiative to comply with the general framework;

  5. Invest in training:

    1. Build confidence in AI technologies, and clarify how they are designed to improve wellbeing;

    2. Answer plainly the big questions on AI goals and mechanisms when raised by the public;

    3. Train a new generation of professional AI experts.

Seven of the 12 UK Government guidelines help translate these general principles into specific advice for health care, laboratory medicine included. The remaining five guidelines (data transparency, cybersecurity, regulation, generating evidence and commercial strategy) add no substantial information with respect to the European framework and are more specific to the local context [23]. Therefore, the authors summarize the seven guidelines relevant to this paper.

  1. How to operate ethically:

    1. Data must be employed to the clear benefit of patients and citizens. Developers are responsible for finding the technical means to preserve patients’ rights and privacy, including their awareness that they are dealing with machines.

  2. Have a clear value proposition:

    1. Design each device with clear goals for clear users by default: what issue does it aim to solve? For which recipients? How?

    2. A clear value proposition should address the following points: the problem or need to be solved; how the proposal can solve it; how the solution fits in with existing or innovative healthcare facilities; how effectiveness is evidenced; the cost-effectiveness of the solution; capacity to scale; sustainability over time;

    3. Examples of goals and benefits expected to generate value in medical care are improving the accuracy of diagnoses, improving the effectiveness of integrated care outcomes, adding generalizable knowledge, and reducing operational waste;

    4. Each device should involve the expected users by design, from the early phase to revision (e.g., representatives of healthcare providers and patient organisations);

    5. Identify key performance indicators by design, including biomedical and patient-reported outcomes (PROs); impact on the workforce; impact on the workflow; financial impact; and value-for-money;

  3. Usability and accessibility:

    1. No matter how advanced it is, expensive technology can be wasted if it is too complex to use; intuitiveness is key to reducing rather than sharpening inequalities (e.g., the digital divide);

  4. Technical assurance:

    1. AI technologies must comply with general international standards for medical devices (e.g., IEC 62304);

    2. Each technology should pass a dedicated testing plan, including validation, load testing, penetration testing, integration testing and bias testing among the procedures (a minimal example of a bias check is sketched after this list);

  5. Clinical safety:

    1. Safety assessment must be performed by design under the supervision of an expert clinician, in order to avoid post-hoc interventions once the technology is available on the market;

    2. A safety report must be released before introducing the technology on the market, along with risk management activities, a hazard log and management protocols across the entire technology life cycle;

  6. Data protection:

    1. Developers must clearly demonstrate how a certain device collects, stores and processes data safely, fairly and legally;

    2. Data use must be proportional and justifiable: why are these data (and not other data) employed in the interest of patients and citizens; why are the data proportional to the specific goal pursued by the technology (Ockham’s razor). Some guiding questions can help comply with this task: is it necessary to employ personal data? Is it necessary to process data in this way? Could anonymous data be employed to reach the same goal? Are additional data collected beyond those necessary to reach the goal? Do the advantages substantially outweigh the disadvantages?

  7. Interoperability and open standards:

Fragmented technologies facilitate data waste, and data fragmentation can be harmful for patients; data sets must be standardized and interoperable across each facility that generates or employs information, including health and social care.
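As one example of what the bias testing mentioned under technical assurance above might look like in practice, the following sketch compares a classifier’s sensitivity and specificity across patient subgroups and flags large disparities. The records, subgroup labels and disparity threshold are illustrative assumptions, not part of the UK guidance.

```python
# Minimal sketch of a subgroup bias check (illustrative threshold and data):
# compare sensitivity and specificity of a classifier across patient
# subgroups and flag disparities that exceed a chosen tolerance.
from collections import defaultdict

def subgroup_metrics(records):
    """records: iterable of (subgroup, true_label, predicted_label), labels in {0, 1}."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["tp" if y_pred == 1 else "fn"] += 1
        else:
            c["tn" if y_pred == 0 else "fp"] += 1
    metrics = {}
    for group, c in counts.items():
        sens = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else None
        spec = c["tn"] / (c["tn"] + c["fp"]) if (c["tn"] + c["fp"]) else None
        metrics[group] = {"sensitivity": sens, "specificity": spec}
    return metrics

def flag_disparities(metrics, tolerance=0.10):
    """Flag metric gaps between subgroups larger than `tolerance` (assumed threshold)."""
    flags = []
    for metric in ("sensitivity", "specificity"):
        values = {g: m[metric] for g, m in metrics.items() if m[metric] is not None}
        if values and max(values.values()) - min(values.values()) > tolerance:
            gap = max(values.values()) - min(values.values())
            flags.append(f"{metric} gap {gap:.2f} across groups {sorted(values)}")
    return flags

# Synthetic example records: (subgroup, true label, predicted label)
records = [("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
           ("B", 1, 0), ("B", 1, 1), ("B", 0, 0), ("B", 0, 1)]
metrics = subgroup_metrics(records)
print(metrics)
print(flag_disparities(metrics))
```

In a real testing plan, such a check would be run on the validation data of each intended population and documented alongside the other technical assurance procedures.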

In order to preserve privacy and minimize the risk of data leaks, Jackson et al. suggest the adoption of federated machine learning: rather than training machines outside protected environments, machines should be trained inside these protected environments, working on aggregated content at a later stage [32] (a minimal sketch of such a scheme is given below). More generally, however, not only do guidelines and frameworks need to be translated into effective local arrangements, but ethical issues need to be internalized across the entire cycle of care.
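The sketch below illustrates one possible form of the federated approach, under the assumption of a simple federated-averaging scheme; the sites, model and data are entirely hypothetical and are not drawn from Jackson et al. Each laboratory trains a local model on data that never leave its protected environment, and only the model parameters are aggregated centrally.

```python
# Minimal federated-averaging sketch (hypothetical sites, synthetic data):
# each site fits a local logistic regression on its own data; only the model
# weights (not patient records) are shared and averaged by the coordinator.
import numpy as np

rng = np.random.default_rng(42)

def local_training(X, y, epochs=200, lr=0.1):
    """Train a logistic regression locally with plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def federated_average(local_models, sizes):
    """Aggregate local weights, weighting each site by its sample size."""
    total = sum(sizes)
    w = sum(n * wi for (wi, _), n in zip(local_models, sizes)) / total
    b = sum(n * bi for (_, bi), n in zip(local_models, sizes)) / total
    return w, b

# Two hypothetical laboratories with synthetic two-feature analyte data
sites = []
for n in (300, 500):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)
    sites.append((X, y))

local_models = [local_training(X, y) for X, y in sites]
global_w, global_b = federated_average(local_models, [len(y) for _, y in sites])
print("Global model weights:", global_w, "bias:", global_b)
```

Real deployments would add safeguards such as secure aggregation or differential privacy on top of this basic scheme, but the key point remains that raw patient data never leave the protected environment in which they were generated.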

Conclusions

The authors of this paper strongly agree with Jackson et al. that ethics must not be reduced to compliance with rules, as more rules can be symptomatic of a poor ethical attitude, just as ethically questionable actions can be justified and pursued by legal means. Overly complicated rules are more easily neglected or violated, missing out on great opportunities for societal development and individual wellbeing. A few clear rules, substantial clarity of purpose, simplification, flexibility and professional AI training probably represent a better approach, one that both the European framework and the UK guidelines seem to have captured.

The regulations discussed here may not be exhaustive of all the efforts that may be underway in different countries, nor is the discussion exhaustive of all the ethical issues arising from the use of AI technologies in pathology and laboratory medicine; yet they provide a clear baseline to address and stimulate the forthcoming scientific debate.


Corresponding author: Federico Pennestrì, IRCCS Istituto Ortopedico Galeazzi, 20161 Milan, Lombardia, Italy, E-mail:

  1. Research funding: None declared.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: Authors state no conflict of interest.

  4. Informed consent: Not applicable.

  5. Ethical approval: Not applicable.

References

1. Pennestrì, F, Banfi, G. Value-based healthcare: the role of laboratory medicine. Clin Chem Lab Med 2019;57:798–801. https://doi.org/10.1515/cclm-2018-1245.

2. Grmek, M. Western medical thought from antiquity to the middle ages. Cambridge, Massachusetts: Harvard University Press; 1988.

3. Kluytmans, A, Tummers, M, van der Wilt, GJ, Grutters, J. Early assessment of proof-of-problem to guide health innovation. Value Health 2019;22:601–6. https://doi.org/10.1016/j.jval.2018.11.011.

4. Davini, O. Il prezzo della salute. Roma: Nutrimenti; 2013.

5. Organization for Economic Cooperation and Development. Health at a glance 2021: OECD indicators. Paris: OECD Publishing; 2021.

6. Ledley, RS, Lusted, LB. Reasoning foundations of medical diagnosis; symbolic logic, probability, and value theory aid our understanding of how physicians reason. Science 1959;130:9–21. https://doi.org/10.1126/science.130.3366.9.

7. Liu, X, Faes, L, Kale, AU, Wagner, SK, Fu, DJ, Bruynseels, A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health 2019;1:e271–97. https://doi.org/10.1016/s2589-7500(19)30123-2.

8. Birkhoff, DC, van Dalen, ASHM, Schijven, MP. A review on the current applications of artificial intelligence in the operating room. Surg Innovat 2021;28:611–9. https://doi.org/10.1177/1553350621996961.

9. Yu, KH, Beam, AL, Kohane, IS. Artificial intelligence in healthcare. Nat Biomed Eng 2018;2:719–31. https://doi.org/10.1038/s41551-018-0305-z.

10. Felländer-Tsai, L. AI ethics, accountability, and sustainability: revisiting the Hippocratic oath. Acta Orthop 2020;91:1–2. https://doi.org/10.1080/17453674.2019.1682850.

11. Dalton-Brown, S. The ethics of medical AI and the physician-patient relationship. Camb Q Healthc Ethics 2020;29:115–21. https://doi.org/10.1017/s0963180119000847.

12. Ferrario, A, Loi, M. Algorithm, machine learning and artificial intelligence. Social Science Research Network; 2021. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3817377 [Accessed 3 Feb 2022]. https://doi.org/10.2139/ssrn.3817377.

13. Rainey, S, Erden, YJ, Resseguier, A. AIM, philosophy and ethics. In: Lidströmer, N, Ashrafian, H, editors. Artificial intelligence in medicine. Cham: Springer; 2021. https://doi.org/10.1007/978-3-030-58080-3_243-1.

14. De Micco, F, De Benedictis, A, Fineschi, V, Frati, P, Ciccozzi, M, Pecchia, L, et al. From syndemic lesson after COVID-19 pandemic to a “systemic clinical risk management” proposal in the perspective of the ethics of job well done. Int J Environ Res Publ Health 2021;19:15. https://doi.org/10.3390/ijerph19010015.

15. Brinati, D, Ronzio, L, Cabitza, F, Banfi, G. Artificial intelligence in laboratory medicine. In: Lidströmer, N, Ashrafian, H, editors. Artificial intelligence in medicine. Cham: Springer; 2021. https://doi.org/10.1007/978-3-030-64573-1_312.

16. World Health Organization. Global patient safety action plan 2021–2030: towards eliminating avoidable harm in health care. Geneva: World Health Organization; 2021.

17. Organization for Economic Cooperation and Development. Laying the foundations of artificial intelligence in health. OECD Working Paper No. 128. http://www.oecd.org/els/health-systems/health-working-papers.htm [Accessed 3 Feb 2022].

18. European Commission. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Building trust in human-centric artificial intelligence. https://ec.europa.eu/jrc/communities/en/community/digitranscope/document/building-trust-human-centric-artificial-intelligence [Accessed 3 Feb 2022].

19. Cheshire, JWP. Loopthink: a limitation of medical artificial intelligence. Ethics Med 2017;33:7–12.

20. European Commission. Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts; 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=IT [Accessed 3 Feb 2022].

21. Independent High-Level Expert Group on Artificial Intelligence. Ethics guidelines for trustworthy AI; 2019. https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1 [Accessed 3 Feb 2022].

22. European Commission. White paper on artificial intelligence – a European approach to excellence and trust; 2020. https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf [Accessed 3 Feb 2022].

23. United Kingdom Government Department of Health & Social Care. A guide to good practice for digital and data-driven health technologies; 2021. https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology [Accessed 3 Feb 2022].

24. Cabitza, F, Campagner, A, Ferrari, D, Di Resta, C, Ceriotti, D, Sabetta, E, et al. Development, evaluation, and validation of machine learning models for COVID-19 detection based on routine blood tests. Clin Chem Lab Med 2020;59:421–31. https://doi.org/10.1515/cclm-2020-1294.

25. Campagner, A, Carobene, A, Cabitza, F. External validation of machine learning models for COVID-19 detection based on complete blood count. Health Inf Sci Syst 2021;9:37. https://doi.org/10.1007/s13755-021-00167-3.

26. Badrick, T, Banfi, G, Bietenbeck, A, Cervinski, MA, Loh, TP, Sikaris, K. Machine learning for clinical chemists. Clin Chem 2019;65:1350–6. https://doi.org/10.1373/clinchem.2019.307512.

27. Cabitza, F, Banfi, G. Machine learning in laboratory medicine: waiting for the flood? Clin Chem Lab Med 2018;56:516–24. https://doi.org/10.1515/cclm-2017-0287.

28. Logical Observation Identifiers Names and Codes. LOINC term basics. https://loinc.org/get-started/loinc-term-basics/ [Accessed 24 Feb 2022].

29. Beauchamp, TL, Childress, JF. Principles of biomedical ethics. New York: Oxford University Press; 1979.

30. Banfi, G. Utilizzo del materiale biologico residuo nel laboratorio clinico. Biochim Clin 2021;45:408–11.

31. Briscoe, F, Ajunwa, I, Gaddis, A, McCormick, J. Evolving public views on the value of one’s DNA and expectations for genomic database governance: results from a national survey. PLoS One 2020;15:e0229044. https://doi.org/10.1371/journal.pone.0229044.

32. Jackson, BR, Ye, Y, Crawford, JM, Becich, MJ, Roy, S, Botkin, JR, et al. The ethics of artificial intelligence in pathology and laboratory medicine: principles and practice. Acad Pathol 2021;8:2374289521990784. https://doi.org/10.1177/2374289521990784.

33. Reuters. Tesla and U.S. regulators strongly criticized over role of autopilot in crash; 2020. https://www.reuters.com/article/uk-tesla-crash-idINKBN20J2II [Accessed 24 Feb 2022].

34. Hatherley, JJ. Limits of trust in medical AI. J Med Ethics 2020;46:478–81. https://doi.org/10.1136/medethics-2019-105935.

35. Ferrario, A, Loi, M, Viganò, E. Trust does not need to be human: it is possible to trust medical AI. J Med Ethics 2020;47:437–8. https://doi.org/10.1136/medethics-2020-106922.

36. Daniels, N. Just health. Meeting health needs fairly. Cambridge: Cambridge University Press; 2008. https://doi.org/10.1017/CBO9780511809514.

37. Berkman, BE, Hull, SC. The “right not to know” in the genomic era: time to break from tradition? Am J Bioeth 2014;14:28–31. https://doi.org/10.1080/15265161.2014.880313.

38. Stanford Encyclopedia of Philosophy. Ockham’s razor. https://plato.stanford.edu/entries/ockham/#OckhRazo [Accessed 3 Feb 2022].

39. Petersen, C, Berner, ES, Embi, PJ, Fultz Hollis, K, Goodman, KW, Koppel, R, et al. AMIA’s code of professional and ethical conduct 2018. J Am Med Inf Assoc 2018;25:1579–82. https://doi.org/10.1093/jamia/ocy092.

40. Hedlund, J, Eklund, A, Lundström, C. Key insights in the AIDA community policy on sharing of clinical imaging data for research in Sweden. Sci Data 2020;7:331. https://doi.org/10.1038/s41597-020-00674-0.

Received: 2022-02-04
Accepted: 2022-03-18
Published Online: 2022-04-12
Published in Print: 2022-11-25

© 2022 Walter de Gruyter GmbH, Berlin/Boston
