
Analysis: Artificial Intelligence and Covid-19

Integrating artificial intelligence in bedside care for covid-19 and future pandemics

BMJ 2021; 375 doi: https://doi.org/10.1136/bmj-2021-068197 (Published 31 December 2021) Cite this as: BMJ 2021;375:e068197


  1. Michael Yu, methodologist¹
  2. An Tang, professor of radiology²
  3. Kip Brown, research associate¹
  4. Rima Bouchakri, data scientist¹
  5. Pascal St-Onge, health data science programme manager¹
  6. Sheng Wu, technical officer³
  7. John Reeder, director, special programme for research and training in tropical diseases³
  8. Louis Mullie, intensivist¹
  9. Michaël Chassé, deputy scientific director of health data science¹

Affiliations:

  1. Center for Integration and Analysis of Medical Data (CITADEL), Department of Medicine, University of Montreal, Centre de Recherche du Centre Hospitalier de l’Université de Montréal, Montreal, Canada
  2. Department of Radiology, Centre Hospitalier de l’Université de Montréal (CHUM), Canada
  3. World Health Organization, Geneva, Switzerland

Correspondence to: M Chassé michael.chasse.med@ssss.gouv.qc.ca

Michael Yu and colleagues examine the challenges in developing AI tools for use at point of care

The covid-19 pandemic created unprecedented challenges for both clinicians and healthcare institutions. Adapting to a rapidly emerging disease while facing staff and material shortages forced difficult decisions about how best to allocate resources. Artificial intelligence (AI) rapidly moved to the forefront of the effort to help healthcare systems cope with covid-19. Hundreds of new models were developed, promising solutions for every aspect of patient care, from diagnostics to therapeutics and logistics. Yet only a small minority of these models were deployed, and none became widely adopted.12 We argue that the covid-19 pandemic exposed flaws in the technological, institutional, and ethical foundations on which AI must build if it is to meaningfully improve bedside care. If AI is to be part of a rapid response to future health crises, the challenges it faced during the covid-19 pandemic must be carefully analysed and overcome.

AI is a branch of computer science that uses data and algorithms to extract meaning in a way that is characteristic of intelligent beings—that is, turning data into effective decision making processes. Research applications of AI in medicine have already emerged far and wide—for example, in drug discovery and modelling of complex biological systems. By contrast, efforts to integrate AI into everyday clinical care have had minimal success, despite the comparatively simple nature of the problems: optimising patient trajectories, maximising use of existing facilities, or determining when and how to reallocate resources. We surmise that this translational gap, which was magnified by the covid-19 pandemic, is due to the nature of the underlying data, the infrastructure through which they emerge, and the human context in which they occur. By understanding the influence of these factors on the chances of success of AI, healthcare systems can improve their readiness for future crises.

Development challenges

Large quantities of high quality data are essential for training an AI model, the process by which patterns identified from observations are developed into a generalisable model for prediction. The covid-19 pandemic exposed significant differences in the ability of institutions to provide data rapidly, in standardised formats suitable for training AI models. Even within a single institution, anomalous events, such as the start of a pandemic, can limit the availability of relevant data. These problems are compounded when collaboration extends beyond institutional, provincial, and national borders, often forcing researchers to choose between large datasets of superficial information and smaller, more detailed datasets. Failure to procure data of sufficient quality or quantity can make the development of an AI model challenging and lead to biases and erroneous conclusions.
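
As a concrete illustration, a basic pre-training quality report can flag missingness and outcome imbalance before model development begins. The sketch below is a minimal example of such a check; the field names and data are invented, and it is not drawn from the article.

```python
# Sketch of a pre-training data quality report: per-field missingness
# and outcome balance, so problems surface before model development.
import pandas as pd

def quality_report(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Summarise missingness and cardinality; print outcome balance."""
    report = pd.DataFrame({
        "missing_fraction": df.isna().mean(),
        "n_unique": df.nunique(),
    })
    print("outcome balance:")
    print(df[outcome].value_counts(normalize=True), "\n")
    return report.sort_values("missing_fraction", ascending=False)

# Hypothetical cohort extract (names and values are illustrative)
df = pd.DataFrame({
    "age": [62, 71, None, 55],
    "spo2": [88, None, None, 95],
    "icu_admission": [1, 1, 0, 0],
})
print(quality_report(df, "icu_admission"))
```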

Bedside clinical care relies on inputs from many interrelated processes, which often straddle disjointed data systems. These systems must be reconciled before they can be used for training AI models. Data might originate from patient monitoring devices, from observations entered into electronic health records by healthcare workers, or from known constraints on human and material resources. Manual entry might be unavoidable for data known only to patients, caregivers, or hospital decision makers. Existing systems might also need to be adapted to emergency practices, such as the repurposing of units to create temporary intensive care facilities. Regardless of the source of the data, interoperability is critical. Institutions should promote the adoption of well established standard terminologies, which aid data quality assessment and collaboration. Data standardisation is a cumbersome, rate limiting process, and it should be started well before downstream applications need it.
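
To make the standardisation step concrete, the minimal sketch below maps local laboratory codes from two sites onto LOINC, a widely used standard terminology. The site names, local codes, and mapping table are illustrative assumptions, not from the article; real mappings come from curated terminology services.

```python
# Minimal sketch of harmonising local codes to a standard terminology
# (LOINC) before pooling data for model training.
from dataclasses import dataclass

# Hypothetical mapping from two institutions' local codes to LOINC
LOCAL_TO_LOINC = {
    ("hospital_a", "CRP_SER"): "1988-5",   # C reactive protein, serum/plasma
    ("hospital_b", "LAB-CRP"): "1988-5",
    ("hospital_a", "O2SAT"):   "2708-6",   # oxygen saturation, arterial blood
    ("hospital_b", "SPO2"):    "59408-5",  # oxygen saturation, pulse oximetry
}

@dataclass
class Observation:
    site: str
    local_code: str
    value: float
    unit: str

def standardise(obs: Observation) -> dict:
    """Key an observation by its LOINC code, or flag it for review."""
    loinc = LOCAL_TO_LOINC.get((obs.site, obs.local_code))
    if loinc is None:
        # Unmapped codes are quarantined rather than silently dropped,
        # so gaps in the mapping stay visible before training.
        return {"status": "unmapped", "site": obs.site, "code": obs.local_code}
    return {"status": "ok", "loinc": loinc, "value": obs.value, "unit": obs.unit}

print(standardise(Observation("hospital_b", "LAB-CRP", 42.0, "mg/L")))
```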

If a particular event under study is rare, cooperation between institutions might be essential to make the use of AI models feasible. For example, early in the pandemic, the modest number of patients with covid-19 treated at individual institutions was insufficient to train robust models for prediction of patient trajectories and outcomes.34 Pooling patient level data from many institutions to create larger datasets—the traditional approach in clinical research—proved to be impractical, owing to time consuming ethical and legislative considerations. Here, AI offers several interesting new solutions. “Few-shot learning” (making predictions based on a limited number of samples) is an approach that uses readily available data (eg, historical information) to make generalisations about unseen data—for example, allowing the development of a covid-19 diagnostic model using a small number of training cases.5 Federated learning is another approach that relies on sharing models, rather than data, across institutions. This strategy allows pooling of knowledge without exchange of sensitive information.6 Much like data standardisation, these approaches require a high level of technological preparedness to be ready for use during a health crisis.
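
As an illustration of the federated approach, the sketch below implements one common form, federated averaging (FedAvg), with a toy logistic regression: each site trains on its own records and shares only model weights, never patient data. All data, names, and hyperparameters are synthetic assumptions for illustration.

```python
# Toy federated averaging (FedAvg): sites exchange weights, not records.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    """A few epochs of logistic regression gradient descent on local data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def federated_round(w_global, sites):
    """One round: broadcast weights, train locally, average by site size."""
    updates, sizes = [], []
    for X, y in sites:
        updates.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    shares = np.array(sizes) / sum(sizes)
    return sum(s * u for s, u in zip(shares, updates))

rng = np.random.default_rng(0)
# Two hypothetical sites, each with a small local cohort
sites = []
for _ in range(2):
    X = rng.normal(size=(50, 3))
    y = (X @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=50) > 0).astype(float)
    sites.append((X, y))

w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, sites)
print("pooled model weights:", w)
```

In practice each "site" would sit behind its own firewall and only the weight vectors would cross institutional boundaries, which is what makes the approach attractive when data sharing agreements are slow to negotiate.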

Training with a sufficient quantity of high quality data does not guarantee that AI models will generalise well. Similar concepts might be defined differently across institutions: for instance, operations categorised as “urgent” might imply different types of care across different hospitals. Recent work has attempted to create models that account for such differences.7 These challenges are magnified across geographical areas, where concepts might need to be translated between languages, and adjustments made for baseline patient characteristics and technological capabilities. In some instances, differences in technological capabilities could prevent low income countries from benefiting from AI, or require specialised approaches towards knowledge transfer. Federated learning might improve model generalisability, because it allows for localisation when part of a model is trained locally. The performance trade-offs of institution specific approaches should be weighed against more generic AI solutions.8
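
One simple form of localisation is to recalibrate a shared model's risk scores against a small local sample, leaving the shared model itself untouched. The sketch below uses Platt scaling on synthetic data; it is an illustrative assumption on our part, not the approach of any cited study.

```python
# Sketch of local recalibration (Platt scaling) of a shared model's
# risk scores, to adjust for a site's different baseline risk.
import numpy as np

def fit_platt(scores, outcomes, lr=0.3, epochs=1000):
    """Fit a, b so that sigmoid(a*score + b) matches local outcomes."""
    a, b = 1.0, 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - outcomes
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

rng = np.random.default_rng(1)
shared_scores = rng.normal(0.0, 1.5, size=200)  # shared model logits
# Synthetic site where baseline risk is lower than at the training sites
true_p = 1.0 / (1.0 + np.exp(-(shared_scores - 1.0)))
local_outcomes = (rng.random(200) < true_p).astype(float)

a, b = fit_platt(shared_scores, local_outcomes)
print(f"local calibration: a={a:.2f}, b={b:.2f}")  # b < 0 reflects lower prevalence
```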

Undoubtedly, the overarching solution to the challenges described in this section is to enhance coordination and knowledge sharing among researchers working on AI models for bedside care. A living systematic review of AI models pertaining to covid-19 had, as of August 2021, identified 27 models for disease progression.9 Another systematic review, published in March 2021, evaluated 62 studies involving the use of chest computed tomography for prediction of covid-19 disease progression.10 Both reviews found methodological flaws and risks of bias in almost all studies, attributing this, at least in part, to lack of coordination. To meet systemic demands and to better guide the direction of AI research, institutions should support efforts to move collaboration beyond recognising common features and weaknesses across studies towards a deeper synthesis of best practices and lessons learnt that actively improves future research. The covid-19 pandemic has bolstered AI workshops, including notable efforts in the United Kingdom.2 These workshops present a key opportunity to develop solid guidance for developers of AI solutions, and to create a stronger, more tightly knit community.

Requirements for deployment and adoption

To achieve a sustainable impact, AI researchers should look beyond model development and consider how solutions can be practically and ethically implemented at the bedside. This demands a broader perspective: one that ensures integration with hospital systems, satisfies ethical standards to safeguard patients, and adapts to existing workflows in a way that acknowledges and leverages clinical expertise. If AI researchers do not adapt their work to real world clinical contexts, they risk producing models that are irrelevant, infeasible, or irresponsible to implement. One challenge is upgrading hospital systems to support trained AI models. For example, diagnostic models for early detection of lung disease might need to be integrated with picture archiving and dictation systems. Upgrading these systems can be difficult or unwise during a health crisis, so health institutions would benefit from investing in such infrastructure before the demand arises.
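
As a hedged sketch of the integration step just described, the code below reads a chest study exported from a picture archiving system (in DICOM format, via the pydicom library) and hands it to a trained model. The file name and the predict function are hypothetical placeholders; a live integration would query the archive directly and run a real model.

```python
# Sketch: feed a DICOM study exported from PACS to a diagnostic model.
import numpy as np
import pydicom

def predict_covid_risk(image: np.ndarray) -> float:
    """Placeholder for a trained diagnostic model (stand-in logic only)."""
    return float(image.mean() > 100)

ds = pydicom.dcmread("chest_ct_slice.dcm")  # hypothetical export from PACS
image = ds.pixel_array.astype(np.float32)
# Rescale to Hounsfield units when the CT rescale tags are present
slope = float(getattr(ds, "RescaleSlope", 1.0))
intercept = float(getattr(ds, "RescaleIntercept", 0.0))
hu = image * slope + intercept
print("model output:", predict_covid_risk(hu))
```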

AI also needs to be guided by clear ethical standards.1112 For example, researchers recently discovered unintentional racial biases built into decision making algorithms used to recommend care in the United States.13 Pressure to develop AI models quickly heightens the risk of exacerbating inequalities within and across borders.12 Efforts to establish standards for research involving AI have progressed, including the CONSORT-AI extension for clinical trials of AI interventions,14 and ongoing work on the TRIPOD-AI reporting guideline.15 These standards will enhance transparency and reproducibility, which have so far been lacking.16 As with diagnostic tests, medical devices, and therapeutics, translation of AI from bench to bedside should follow evidence based clinical evaluation standards. Under the pressures of a pandemic, such standards are crucial to ensuring responsible model development.

To be used at the bedside, AI models must also achieve clinical acceptance, which requires mutual, effective knowledge sharing between AI researchers and clinicians. As the potential impact of AI at the bedside increases, medical educators should consider introducing the basics of data science and machine learning to all clinicians.17 AI researchers, meanwhile, should develop a better understanding of clinical needs and avoid black box technologies as much as possible: for example, by developing more interpretable models, following the approach of explainable AI.181920 The acceptance and usability of AI models can also be improved by working with clinicians to design models that supplement, rather than replace, clinical judgment.21 Additionally, AI researchers should integrate other tools in clinical decision making where appropriate. For example, predictions from an AI model can be enhanced when embedded in a decision support system that supports “what if” scenarios, allowing the user to explore alternative strategies for treating patients when beds in the intensive care unit are at capacity, as in the sketch below.22 Disruption to clinical processes caused by the covid-19 pandemic should be seen as an opportunity for clinicians and health institutions to rethink their practices.
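
The toy simulation below illustrates this kind of “what if” exploration: it compares how many candidate admissions are turned away under different model-driven admission thresholds when intensive care beds are scarce. All rates, thresholds, and lengths of stay are invented for illustration.

```python
# Toy "what if" explorer: ICU refusals under different admission thresholds.
import random

def simulate_icu(days=60, beds=20, max_daily_arrivals=8,
                 max_stay=14, admit_threshold=0.5, seed=0):
    """Return how many candidate admissions were refused for lack of beds."""
    rng = random.Random(seed)
    occupied, refused = [], 0
    for _ in range(days):
        occupied = [d - 1 for d in occupied if d > 1]  # daily discharges
        for _ in range(rng.randint(0, max_daily_arrivals)):
            risk = rng.random()                        # stand-in model score
            if risk < admit_threshold:
                continue                               # ward care instead
            if len(occupied) < beds:
                occupied.append(rng.randint(2, max_stay))  # days of stay
            else:
                refused += 1
    return refused

for threshold in (0.3, 0.5, 0.7):
    print(f"admit if risk >= {threshold}: "
          f"{simulate_icu(admit_threshold=threshold)} refusals")
```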

Patient acceptance of AI at the bedside must be earned through all stages of development and implementation. This requires guiding AI research to take account of patient values and to deal with legitimate concerns about a technology that can magnify biases and inequalities in the healthcare system.12 A critical first step towards this goal is cross sectional and intersectional research to understand the diversity of patient perspectives across factors such as sex, age, wealth, race and ethnicity, and health status. Guidelines will then need to translate these perspectives into enforceable policies governing the design, development, implementation, maintenance, and oversight of AI in bedside care.

Enabling robust, agile, responsible AI at the bedside

AI has yet to produce a single widely adopted tool for bedside care of patients with covid-19. This reflects gaps in the research, technical, and policy environment surrounding AI rather than a failure of the field. A considerable amount of work remains before solid foundations are laid for deployment of AI at the bedside.

Researchers in AI can benefit from a more expansive view of the problems they are solving, by considering the work of other researchers and the full decision making and technical context in which their solutions will be deployed. Developing networks with other AI researchers, clinicians, hospital staff, and patients can guide research efforts towards high impact areas. During a pandemic, these networks can accelerate research while acting as a safeguard against potential oversights.

On the technical side, health institutions need to take advantage of crisis free periods to develop the required infrastructure, including efforts to promote standardisation and interoperability. Adopting, promoting, and enhancing standards such as Fast Healthcare Interoperability Resources (FHIR), which is used to capture and exchange institutional data, helps to make AI more readily transferable and supports multisite research.23 Coordination across institutions is also fundamental, and many initiatives have been spurred on by the covid-19 pandemic, such as OpenSAFELY in the UK and CODA-19 in Canada.2425
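
As an example of what FHIR interoperability enables, the sketch below queries a FHIR server for a patient's pulse oximetry observations using the standard REST search parameters. The endpoint URL and patient identifier are placeholders of ours; real deployments would add authentication and error handling.

```python
# Sketch: read standardised observations over FHIR's REST API.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/baseR4"  # hypothetical endpoint

def fetch_oxygen_saturation(patient_id: str) -> list[tuple[str, float]]:
    """Fetch SpO2 observations (LOINC 59408-5) for one patient."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": "http://loinc.org|59408-5"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    results = []
    for entry in bundle.get("entry", []):
        obs = entry["resource"]
        value = obs.get("valueQuantity", {})
        results.append((obs.get("effectiveDateTime", "?"), value.get("value")))
    return results

print(fetch_oxygen_saturation("example-patient-id"))
```

Because the same query works against any conformant server, a model trained at one institution can, in principle, be fed from another institution's records without bespoke extraction code.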

Finally, clear guidelines are required for AI research and deployment. Producing them requires a large scale effort to understand the diverse values and needs of those who might be affected by AI at the bedside, including patients, caregivers, and hospital administrators. This process should be followed by the creation of enforceable standards, involving multistage evaluations of AI models, which can serve as guiding principles for model development and deployment.

Creating this research, technical, and policy infrastructure requires a substantial effort, although one that is already under way. We are confident that AI researchers already recognise the need to move the focus beyond individual projects and towards developing translational tools that can guide AI research into clinical practice.

Key messages

  • Despite substantial research, artificial intelligence (AI) has had a limited effect on bedside care during the covid-19 pandemic

  • To develop AI tools for bedside care and hospital management, researchers require standards for integrating data from disparate sources, and software that can process data in real time

  • Variations in healthcare institutions' policies, resources, expertise, and patient populations, as well as differences between prepandemic and postpandemic operations, require careful consideration when generalising AI solutions

  • Engaging AI researchers, healthcare providers, ethicists, and other experts is necessary to develop standards for establishing when it is appropriate to deploy AI tools and to integrate AI into existing workflows

Acknowledgments

We thank the Consortium Santé Numérique de l’Université de Montréal, especially Kamran Afzali, Pascale Beliveau, Khedidja Seridi, and Yves Terrat, for their insights, review, and comments on this article.

Footnotes

  • Contributors and sources: MY, MC, and LM conceptualised and wrote the first draft of the manuscript. SW and JR contributed to the reflection on clinical experience and evidence based policy making and helped to contextualise this work with the WHO research agenda related to artificial intelligence. All authors provided scientific, methodological, and content expertise. All authors reviewed, edited, and approved the final version of the manuscript. MC and LM are guarantors. SW and JR are staff members of WHO. The authors alone are responsible for the views expressed in this article and they do not necessarily represent the decisions, policy, or views of WHO.

  • Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.

  • Provenance and peer review: Commissioned; externally peer reviewed.

  • This collection of articles was proposed by WHO Department of Digital Health and Innovation and commissioned by The BMJ. The BMJ retained full editorial control over external peer review, editing, and publication of these articles. Open access fees were funded by WHO.

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non Commercial IGO License (CC BY-NC 3.0 IGO; https://creativecommons.org/licenses/by-nc/3.0/igo/), which permits use, distribution, and reproduction for non-commercial purposes in any medium, provided the original work is properly cited.

References